Chapter 6 activity

What two properties must be satisfied for an input domain to be properly partitioned? 

– The partition must cover the entire domain (completeness)
– The blocks must not overlap (disjoint)
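
As a toy illustration (our own example, not from the book), a partition of the integers into negative / zero / positive blocks satisfies both properties: every input falls into exactly one block.

```java
public class PartitionCheck {
    // Three blocks partitioning the int inputs. Every value maps to
    // exactly one block, so the partition is complete (covers the whole
    // domain) and disjoint (no value is in two blocks).
    public static String block(int x) {
        if (x < 0) return "negative";
        if (x == 0) return "zero";
        return "positive";
    }

    public static void main(String[] args) {
        for (int x = -3; x <= 3; x++) {
            System.out.println(x + " -> " + block(x));
        }
    }
}
```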

What is an Input Domain Model (IDM)? 

An input domain model (IDM) represents the input space of the system under test in an abstract way. It helps the test engineer define and create better tests by establishing the structure of the input domain and its partitions.

What gives more tests, each choice coverage or pair-wise coverage?

Pair-wise coverage, since it combines a value from each block with a value from every block of each of the other characteristics, while each choice coverage only requires every block to appear in at least one test.
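
A rough sketch of the minimum test counts (using hypothetical characteristics and block counts, not an example from the book):

```java
import java.util.Arrays;

public class CoverageCounts {
    // Each choice: every block of every characteristic must appear in at
    // least one test, so the characteristic with the most blocks sets the minimum.
    public static int eachChoiceMin(int[] blocksPerCharacteristic) {
        return Arrays.stream(blocksPerCharacteristic).max().getAsInt();
    }

    // Pair-wise: every pair of blocks from two different characteristics
    // must appear together in some test, so at least the product of the
    // two largest block counts is needed.
    public static int pairWiseMin(int[] blocksPerCharacteristic) {
        int[] sorted = blocksPerCharacteristic.clone();
        Arrays.sort(sorted);
        int n = sorted.length;
        return sorted[n - 1] * sorted[n - 2];
    }

    public static void main(String[] args) {
        int[] blocks = {3, 2, 2}; // hypothetical IDM with three characteristics
        System.out.println("each choice needs >= " + eachChoiceMin(blocks)); // 3
        System.out.println("pair-wise needs  >= " + pairWiseMin(blocks));    // 6
    }
}
```

With 3, 2, and 2 blocks, each choice coverage can be satisfied with 3 tests, while pair-wise needs at least 6.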


Chapter 4 and 5 activity

After reading chapters 4 and 5 of Introduction to Software Testing, we were asked to answer these questions about TDD and test coverage criteria:

  1. What is “correctness” in agile processes?
    For testing and software engineers, correctness has to do with the number of green tests: our code is considered correct if it passes all the tests.
  2. Do TDD tests do a good job testing the software?
    No, TDD tests mainly define the requirements and specifications of our code; they do not really evaluate correct behavior thoroughly or cover edge cases.
  3. Can we automate our tests without TDD?
    Yes, tests can be automated without following TDD principles. Imagine writing the code first, then just executing the tests to validate it.
  4. Can we use TDD without automating our tests?
    Yes, TDD consists of writing the tests first as a guide for the code to be written; automation can help to write tests and validate faster, but it is not mandatory.
  5. What four structures do we use for test criteria?
    1.- Test Requirement: A test requirement is a specific element of a software artifact that a test case must satisfy or cover.
    2.- Coverage Criterion: A coverage criterion is a rule or a collection of rules that impose test requirements on a test set.
    3.- Minimal Test Set: A test set T such that removing any test from T would leave some test requirement unsatisfied.
    4.- Minimum Test Set: Smallest possible test set T that satisfies all test requirements.
  6. What usually prevents our tests from achieving 100% coverage?
    Test requirements that cannot be satisfied are called infeasible: formally, no test case values exist that meet them. Detecting infeasible test requirements is undecidable for most coverage criteria, and even though researchers have tried to find partial solutions, they have had only limited success. Thus, 100% coverage is usually impossible in practice.
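
A minimal, made-up example of an infeasible test requirement: branch coverage asks for both outcomes of the decision below, but the true branch can never execute.

```java
public class InfeasibleExample {
    // Branch coverage imposes two test requirements on this decision:
    // take the true branch and take the false branch. No int is both
    // positive and negative, so the true branch is dead code and that
    // requirement is infeasible -- 100% branch coverage cannot be reached.
    public static String check(int x) {
        if (x > 0 && x < 0) {
            return "unreachable";
        }
        return "reachable";
    }

    public static void main(String[] args) {
        System.out.println(check(5));   // always "reachable"
        System.out.println(check(-5));  // always "reachable"
    }
}
```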

  7. Some organizations in industry who adopt TDD report that it succeeds very well, and others report that it fails. Based on your knowledge of TDD and any experience you have, why do you think it succeeds sometimes but not all?
    Many companies do not apply TDD well, and that is why they do not succeed with it. It is hard for some engineers to follow TDD principles; some of us are not used to writing the tests before the code.
  8. A few software organizations use test criteria and report great success. However, most organizations do not currently use test criteria. Based on your knowledge and experience, why do you think test criteria are not used more?
    We think test criteria are quite ambiguous: there is no single standard way of measuring them, so everyone does it differently. Also, achieving high test coverage is expensive and often not worth the cost, which is why most companies decide not to invest heavily in testing.

Chapter 4 activity

For this post we are going to show how we solved the exercises for chapter 4 using TDD. We need to add new operations, like multiplication and division, to the class. The test file only has a test for the add operation, as it is the only function the class currently has.


First of all, we need to write simple tests that describe what our new operations do. Of course, these tests should fail as we haven’t coded the functions yet.


Now we can start writing the functionality. We are going to start with subtraction, as it is the first test we should pass before moving to the next one. In this exercise, the order in which we write the new functions does not matter, but in more complex projects we should follow a specific order, because some tests might have multiple dependencies.
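
Since the original screenshots are not included, here is a minimal sketch of what this step could look like, assuming a hypothetical `Calculator` class that already had `add`; plain `assert` statements stand in for the JUnit tests so the snippet is self-contained:

```java
public class Calculator {
    public static int add(int a, int b) {
        return a + b;
    }

    // Green phase: just enough code to make the subtraction test pass.
    public static int subtract(int a, int b) {
        return a - b;
    }

    public static void main(String[] args) {
        // These stand in for the JUnit assertions in the real test file.
        assert add(2, 3) == 5;
        assert subtract(5, 3) == 2;
    }
}
```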


The next step is to write the test for the divide function. Here we have a problem: we don't know how to implement this function, because it could return a double or an integer. To follow TDD correctly, we first have to make the test pass; right now the test only asks for an integer, so we will write just enough code to make that pass.


At last, the multiply function. Here again we ask ourselves if this function should accept float values, but that is not specified in our tests, so for now we will handle only integers for the input and output.
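
Again as a sketch with assumed names (integer-only, as the current tests demand):

```java
public class CalculatorIntOps {
    // Integer-only for now: the current tests only ask for int results.
    public static int multiply(int a, int b) {
        return a * b;
    }

    // Integer division: 7 / 2 == 3 here, which the stricter tests
    // added later will flag as a problem.
    public static int divide(int a, int b) {
        return a / b;
    }
}
```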



Now let's imagine that the testing team added new tests, with more specific cases trying to find faults in our code. They included a test that divides two integers whose result is not an integer, a test for a division by zero, and a test with negative integers. After running the tests, this happens:


As we can see, only the division function fails the new tests. One test expects floating-point numbers; the other expects us to catch an error when the divisor is zero. This means we need a refactor to return doubles and to detect divisions by zero. This will change some of our previous tests: the first division test passes but expects integers, while the new test expects doubles. There is an inconsistency in the tests, and we should resolve it with the team first. The final decision is that a division should return doubles, because that gives a more exact result. With that information we can go back to the tests, modify them accordingly, and then start working on the refactor.
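
The refactor described above could look like this (again a sketch with assumed names, not the actual repository code):

```java
public class CalculatorRefactored {
    // Division now returns a double for exact results, and division by
    // zero is reported explicitly. Note that (double) a / b would quietly
    // yield Infinity when b == 0, so the explicit check is needed.
    public static double divide(int a, int b) {
        if (b == 0) {
            throw new ArithmeticException("division by zero");
        }
        return (double) a / b;
    }

    public static void main(String[] args) {
        assert divide(7, 2) == 3.5;   // non-integer result now supported
        assert divide(-6, 3) == -2.0; // negative integers
        try {
            divide(1, 0);
            assert false : "expected ArithmeticException";
        } catch (ArithmeticException expected) {
            // division by zero is detected, as the new tests require
        }
    }
}
```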


With this exercise we practiced TDD and refactoring. We learned how important it is to have good tests that define the requirements for each function and show how specific behavior should be handled. It also showed us how, in TDD, the tests are the guideline for our code and can reveal design errors or inconsistencies before we start writing the code, a huge advantage over other methodologies.

The code of the exercise is available here:

Week 4 progress

During this week we discussed the project and the tools we are going to use from now on. Since we are working with a separate development team and will only be doing the tests, we should work with tools that the dev team is comfortable using. The project is a web application, something most of us have done before, but in different ways and using different tools. After talking with the other part of the team, we agreed to use VueJS instead of React for the front end, mainly because it is a framework most of us are familiar with. The database is going to be PostgreSQL, which already has its own testing framework. For the API, the team decided to go with Golang and Travis for integration testing. The architecture looks like this:


The set-up is already available at the dev team repositories here:

Client –




Chapter 3 exercises

1.- Why do testers automate tests? What are the limitations of automation?
Testers automate tests to reduce costs and human error, and to make regression testing easier by allowing a test to be run repeatedly with the press of a button. The main limitation is that automation cannot fix poor test design: an automated test only checks the values and assertions we give it.
2.- Give a one-to-two paragraph explanation of how the inheritance hierarchy can affect controllability and observability:
Inheritance can make following inputs and outputs hard, since you have to keep track of multiple class definitions to get a holistic view of what is going on while the tests are running. Sometimes inheritance graphs are so deep that following them feels like going up and down like a yo-yo; this "yo-yo problem" term was defined in "Problems in Object-Oriented Software Reuse" by Taenzer, Ganti, and Podar.
3.- Develop JUnit tests for the BoundedQueue class.
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class BoundedQueueTest {

    /** Test of enQueue method, of class BoundedQueue. */
    @Test
    public void testEnQueue() {
        BoundedQueue instance = new BoundedQueue(5);
        instance.enQueue(1);
        assertEquals("[1]", instance.toString());
    }

    /** Test of deQueue method, of class BoundedQueue. */
    @Test
    public void testDeQueue() {
        BoundedQueue instance = new BoundedQueue(5);
        instance.enQueue(1);
        instance.enQueue(2);
        instance.deQueue();
        assertEquals("[2]", instance.toString());
    }

    /** Test of isEmpty method, of class BoundedQueue. */
    @Test
    public void testIsEmpty() {
        BoundedQueue instance = new BoundedQueue(5);
        assertEquals(true, instance.isEmpty());
    }

    /** Test of isFull method, of class BoundedQueue. */
    @Test
    public void testIsFull() {
        BoundedQueue instance = new BoundedQueue(5);
        for (int i = 0; i < 5; i++) {
            instance.enQueue(i);
        }
        assertEquals(true, instance.isFull());
    }

    /** Test of toString method, of class BoundedQueue. */
    @Test
    public void testToString() {
        BoundedQueue instance = new BoundedQueue(5);
        instance.enQueue(1);
        instance.enQueue(2);
        instance.enQueue(4);
        instance.deQueue();
        instance.enQueue(1);
        assertEquals("[2, 4, 1]", instance.toString());
    }
}
4.- Delete the explicit throw of NullPointerException in the Min program. Verify that the JUnit test for a list with a single null element now fails.
import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class MinTest {

    /** Test of min method, of class Min. */
    @Test(expected = NullPointerException.class)
    public void testMin() {
        List<Object> list = new ArrayList<Object>();
        list.add(null); // a list with a single null element
        Object result = Min.min(list);
    }
}
6.- Using the class PrimeNumbers, describe 5 tests using the methods of the class to show the following points:
a) A test that does not reach the fault
b) A test that reaches the fault but does not infect
c) A test that infects the state but does not propagate
d) A test that propagates but does not reveal
e) A test that reveals the fault
For a test that does not reach the fault, we can call the method computePrime with a parameter less than or equal to 0, e.g. "computePrime(0)".

This will never let us enter the while loop where the operations happen and where the fault is located.
For our second test, if we call the method with a parameter such as 3, i.e. "computePrime(3)", the program will run the part of the code where the fault is, but since the number doesn't end in a 9, it doesn't infect.
For our third test, if we call the method with the parameter 19, it won't add 19 as a prime number because of the fault in the program. But since we are not executing the toString() of our PrimeNumbers class, even if it infects, it won't propagate and the user won't notice, even though we already have a wrong value because of the fault.
For the fourth test, it would be the same as the last case, but this time we would use the method toString(), which lets the user see what's wrong in the execution (if they have expected values to compare against the current result).
Only when the user notices this (by comparing against the expected test results) are we in the reveal stage; otherwise, it stays in the propagate stage.
7.- Recode the class using the Sieve approach, but leave the fault. What is the first false positive, and how many "primes" must a test case generate before encountering it? What does this exercise show about the RIPR model?
If you want to see the exercises’ code in our repository you can visit it here:

In Class Test


  1. Given the 4 @Test methods shown, how many times does the @Before method execute?
    Four times: @Before runs before each @Test method. In this case it initializes the objects from the class we want to test, so every test starts with a fresh fixture.
  2. The contract for equals() states that no exceptions may be thrown. Instead, equals() is supposed to return false if passed a null argument.
    Write a JUnit test that verifies this property for the EH class.

    @Test public void noNPE() {
           assertEquals(false, eh1.equals(null));
    }
  3. Using the given EH objects, write a test that verifies that equals() returns false if the objects are, in fact, not equal.
    @Test public void equalsFalse() {
           assertEquals(false, eh1.equals(eh2));
    }
  4. Using the given EH objects, write a test that verifies that equals() returns true if the objects are, in fact, equal.
    @Test public void equalsTrue() {
           assertEquals(true, eh1.equals(eh3));
    }
  5. Using the given EH objects, write a test to verify that hashCode() is consistent with equals. This test should fail if hashCode() is commented out (as shown), but pass if hashCode() is implemented.
    @Test public void hashConsistent() {
           assertEquals(true, eh1.hashCode() == eh3.hashCode());
    }
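
The EH class from the handout is not shown; a minimal stand-in (our own assumption about its shape) that would make the tests above pass could look like this:

```java
import java.util.Objects;

// Hypothetical stand-in for the EH class from the handout: equality is
// based on a single value field.
public class EH {
    private final int value;

    public EH(int value) {
        this.value = value;
    }

    @Override
    public boolean equals(Object other) {
        if (!(other instanceof EH)) {
            return false; // also covers null, so no NullPointerException
        }
        return this.value == ((EH) other).value;
    }

    // Consistent with equals: equal objects produce equal hash codes.
    @Override
    public int hashCode() {
        return Objects.hash(value);
    }
}
```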

Planning Week 4

At the start of the semester we planned that during this week we would set everything up to test the project continuously during development. The project will now be only a web application, so we need to install all the essential tools to test JavaScript code, front-end functionality, and database communication.

At the end of this week we should have everything ready in the repository and some demo tests working. This includes the Travis set-up that will allow us to control pull requests automatically.

Our first tests

For this week our job was to learn to use the selected testing tools. Carlos did a little research on testing in Go, which is going to be useful for the back end. Jesus learned how to use Espresso to test Android UI. Adler installed some tools from NightmareJS for the front end, and I, José, did a tutorial on installing and using JUnit 4 for Java. Unfortunately, the project changed focus and we are not going to develop an Android app, so Java and Espresso are not going to be useful anymore. We are going to develop only a web app. The good news is that we are already prepared thanks to the research that Carlos and Adler did. Next week we hope to add demo tests to the project repository and perhaps do all the essential starting configuration.

Our code for JUnit is available on our GitHub here.

Planning week 3

This week we are going to do a little more research on the frameworks and tools we selected, as we stated in the semester plan. For this project we decided that we are going to use mostly Java for Android, SOLID for the database, and Go for the back end. Each one of us is going to learn, through tutorials and guides, a specific tool for testing, and at the end of the week we are going to write down our conclusions in another post. For this week the assignments are:

Carlos Alberto Rueda: Research Go testing tools, pick one, and learn to use it.

Jesus Alberto Alvarez: Learn to use Espresso for testing Android UI.

José Carlos Peñuelas: Learn to use JUnit for automated Java tests.

Adler Zamora: Learn to install and use NightmareJS for several front-end tests.

It’s important to say that this doesn’t mean this is what each one of us is going to work on for the whole semester. We just want to learn a little about setting up the tools and add them to our project from the beginning, in case we need them. Some things might (or will) change, and some tools are not going to be needed. Also, we should be able to explain the usage of a testing tool to our partners so that they can write tests too. The point is that whenever one part of the project needs more work, we can help others test in any framework we use.

Project description

The project we are going to be working on this semester is about smart cities. The objective is to build a platform that collects data about a specific city using tools like ArcGIS and ReactJS. The platform will also create tours of the city using the data and analysis, and then guide the user through the city using the GPS localization of the user’s phone. We need to be able to create, update, and delete walk registers for the users, and for that we may need a database to save all those registers. We are not sure if that is a requirement for the project, but we will take it into consideration for the research described in the next paragraph.

The languages we are mostly going to be working with are JavaScript and Java, and some of the tools are the already mentioned ReactJS, ArcGIS, and probably NodeJS. We are still not sure if we will need an SQL or no-SQL database, but it would be good to know frameworks or tools for integration testing. With that information, our team looked for testing frameworks that will make testing easier for us in every aspect of development:

Espresso is a powerful testing tool for Android. Made by Google, it allows testing of Android applications by simulating app interaction in your test cases. It even allows testing interactions between your app and others. By using assertions and telling Espresso where to click and what to input, it enables app flow testing and checking general responses.

For unit testing, as Android is mainly done with Java, JUnit should be the go-to tool for testing the code and getting easy test automation. It is easy to implement and helps development through TDD.

Mocha is a feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun. Mocha tests run serially, allowing for flexible and accurate reporting, while mapping uncaught exceptions to the correct test cases.

Since we will be using ReactJS as our front-end framework, we need to take into consideration the different tools we could integrate and use for proper end-to-end testing. The ideal scenario is for users to be constantly using our system, so the more we focus on developing high-quality software, the better the product we will give our customers.
We did some research and found different tools to start with, depending on the most appropriate approach.

NightmareJS is a high-level automation library which includes:
Niffy, which detects UI changes and bugs across releases of your web app; Daydream, a Chrome extension that records your actions into a Puppeteer script; and Electron, which helps you build a cross-platform desktop app using HTML, CSS, and JS.

Enzyme is a tool that goes great with Mocha. It is a JS testing utility for React that makes it easier to assert, manipulate, and traverse a React component’s output. Both Mocha and Enzyme are said to be easy to learn, even for new users, in a short period of time.

In class we talked about the goals of testing and mentioned Verification and Validation. There is a tool called Protractor for acceptance testing, which means a system is tested to evaluate its compliance with the business requirements (the validation stage). The downside is that it was originally designed for Angular, although it is possible to configure it to work with React. It also helps you test your app against a wide variety of software.

We are still not sure what kind of database we are going to use for the project, but if we had to guess, we will probably need a no-SQL database, because the data we manipulate may be big or unstructured. MongoDB is a document-based database program that is said to be easy to use and set up in almost any project. To test a MongoDB database easily there is Mongo Orchestration (MO), an HTTP server providing a RESTful interface to MongoDB process management running on the same machine. This will help us automate tests for simple requests using mocked server responses.