
Maven Surefire Plugin


Requirements: Maven 3.x and JDK 1.7 or higher. Due to wrong formatting of console text messages in Maven versions prior to 3.1.0, it is highly recommended to use Maven 3.1.0 or higher.

This is the road map of the development; see the GitHub project.

Versions and Release Targets

3.0.0-M1
  • Maven Plugin API 3.0 and Java 1.7 as minimum
  • @Component is deprecated; @Parameter should be used instead
  • Fixed: Surefire manifest jar classloading broken on latest Debian/Ubuntu Java 8
  • See the Release Notes for version 3.0.0-M1

3.0.0-M2
  • Fixed: JDK9+ (Jigsaw) modular paths (module-info.java) having white spaces on the file system
  • Fixed: Windows slashes appearing in relative paths of Class-Path in MANIFEST.MF (Boot Manifest-JAR)
  • Fixed: Surefire failing to load the class ForkedBooter when using a sub-directory pom file
  • Fixed: plugin failure when Toolchains are used together with JDK9+ and (Jigsaw) modular paths (module-info.java)
  • 3.0 and Legacy Report XSD
  • 3.0.0-M2 shadefire
  • Feature: option to switch off Java 9 modules
  • See the Release Notes for version 3.0.0-M2

3.0.0-M3
  • Fixed: maven-surefire-report-plugin fails on JDK 11
  • Fixed: a JUnit Runner that writes to System.out corrupts Surefire's STDOUT when using JUnit's Vintage Engine
  • Fixed: smart stacktrace in the test summary should not print the JUnit 5 assertion exception type
  • Deprecated skipTests in the Failsafe Plugin
  • Used ShadeFire 3.0.0-M2
  • See the Release Notes for version 3.0.0-M3

3.0.0-M4
  • Provided three reporter extensions that can be used to customize the XML, console, and file reporters; very useful for JUnit 5 users
  • Reworked the internal implementation so that new commands and events can be added easily; the implementation is centralized and is a prerequisite for the next versions
  • Provided bug fixes for Docker Alpine/BusyBox Linux, JUnit 5, and 43 more issues
  • Fixed: ForkClient attempts to consume unrelated lines

3.0.0-M5
  • Test report tasks (prerequisite: SUREFIRE-1222 in 3.0.0-M4)
  • New interprocess communication over TCP/IP, which fixed the current blocker and critical bugs
  • Provided extensions that can be used to customize this interprocess communication in the plugin configuration; used internally to switch between the pipes and the TCP connector
  • TCP/IP channel for the forked Surefire JVM; extensions API and SPI; polymorphism for remote and local process communication

3.0.0-M6
  • Identify the test by UniqueId in SimpleReportEntry rather than by the traditional combination of class and method name (ready for parameterized tests and a coherent re-run)
  • TestSetRunListener should not cache test events or make any guesses about the implementation in StatelessXmlReporter; it should only forward events to multiple reporters
  • Fire and consume more events (normal run start/end, re-run start/end)
  • StatelessXmlReporter repeatedly generates the XML report; it is a stateful reporter and will not work if re-runs or parallel executions send test events out of order (prerequisite: the three items above)
  • Fixed: JUnit 5 in parallel execution mode confuses Surefire reports
  • Fixed: ConsoleOutputFileReporter should support parallel execution of test sets

3.0.0-M7
  • Provider implementations and API
  • More test events used to negotiate the tests to run on a particular forked JVM. This is useful when the tests are filtered by group/category or by a classpath scan with a file filter (the forked JVM is preferable over the Maven JVM), and it is used by the Test List Processor (3.0.0-M8). IsolatedClassLoader is kept for the Test List Processor extension so that the user can decide in which JVM the tests are searched. Possibly the JUnit 5 provider will be able to scan classes by annotations (see launcher.discover()) and negotiate over the forks.
  • Fixed: Surefire unable to run TestNG suites in parallel

3.0.0-M8
  • Extensions API to customize the test set with a test list processor (prerequisite: 3.0.0-M7); possibly a classpath scan based on annotations (currently the scan is based on a file-name pattern, e.g. -Dtest=MyTest)
  • Test list preprocessor support for the tests to be run

3.0.0-M9
  • Breaking backwards compatibility with system properties in configuration parameters, removing deprecated configuration parameters, removing deprecated code, etc.

The Surefire Plugin is used during the test phase of the build lifecycle to execute the unit tests of an application. It generates reports in two different file formats:

  • Plain text files (*.txt)
  • XML files (*.xml)

By default, these files are generated in ${basedir}/target/surefire-reports/TEST-*.xml.

The schema for the Surefire XML reports is available at Surefire XML Report Schema.


The XML reports generated by legacy plugins (versions up to 2.22.0) should be validated against the Legacy Surefire XML Report Schema.

Two plugin versions (2.22.1 and 3.0.0-M1), however, generate 3.0 XML reports that still refer to the legacy schema (see noNamespaceSchemaLocation in the XML report). Projects and tools that expect XML reports validated by the XSD schema, e.g. xUnit, should not use versions 2.22.1 and 3.0.0-M1 of the Surefire plugin.

For an HTML format of the report, please see the Maven Surefire Report Plugin.

Goals Overview

The Surefire Plugin has only one goal:

  • surefire:test runs the unit tests of an application.

Usage

General instructions on how to use the Surefire Plugin can be found on the usage page. Some more specific use cases are described in the examples listed below. Additionally, users can contribute to the GitHub project.

In case you still have questions regarding the plugin's usage, please have a look at the FAQ and feel free to contact the user mailing list. The posts to the mailing list are archived and could already contain the answer to your question as part of an older thread. Hence, it is also worth browsing/searching the mail archive.

If you feel like the plugin is missing a feature or has a defect, you can file a feature request or bug report in our issue tracker. When creating a new issue, please provide a comprehensive description of your concern. Especially for fixing bugs it is crucial that the developers can reproduce your problem. For this reason, entire debug logs, POMs or most preferably little demo projects attached to the issue are very much appreciated. Of course, patches are welcome, too. Contributors can check out the project from our source repository and will find supplementary information in the guide to helping with Maven.

Examples

The following examples show how to use the Surefire Plugin in more advanced use cases.

Unit testing best practices with .NET Core and .NET Standard

There are numerous benefits to writing unit tests; they help with regression, provide documentation, and facilitate good design. However, hard-to-read and brittle unit tests can wreak havoc on your code base. This article describes some best practices regarding unit test design for your .NET Core and .NET Standard projects.

In this guide, you'll learn some best practices when writing unit tests to keep your tests resilient and easy to understand.

By John Reese with special thanks to Roy Osherove

Why unit test?

Less time performing functional tests

Functional tests are expensive. They typically involve opening up the application and performing a series of steps that you (or someone else) must follow in order to validate the expected behavior. These steps may not always be known to the tester, which means they will have to reach out to someone more knowledgeable in the area in order to carry out the test. Testing itself could take seconds for trivial changes, or minutes for larger changes. Lastly, this process must be repeated for every change that you make in the system.

Unit tests, on the other hand, take milliseconds, can be run at the press of a button, and don't necessarily require any knowledge of the system at large. Whether or not the test passes or fails is up to the test runner, not the individual.

Protection against regression

Regression defects are defects that are introduced when a change is made to the application. It is common for testers to not only test their new feature but also features that existed beforehand in order to verify that previously implemented features still function as expected.

With unit testing, it's possible to rerun your entire suite of tests after every build or even after you change a line of code, giving you confidence that your new code does not break existing functionality.

Executable documentation

It may not always be obvious what a particular method does or how it behaves given a certain input. You may ask yourself: How does this method behave if I pass it a blank string? Null?

When you have a suite of well-named unit tests, each test should be able to clearly explain the expected output for a given input. In addition, it should be able to verify that it actually works.

Less coupled code

When code is tightly coupled, it can be difficult to unit test. Without creating unit tests for the code that you're writing, coupling may be less apparent.

Writing tests for your code will naturally decouple your code, because it would be more difficult to test otherwise.

Characteristics of a good unit test

  • Fast. It is not uncommon for mature projects to have thousands of unit tests. Unit tests should take very little time to run. Milliseconds.
  • Isolated. Unit tests are standalone, can be run in isolation, and have no dependencies on any outside factors such as a file system or database.
  • Repeatable. Running a unit test should be consistent in its results; that is, it always returns the same result if you do not change anything in between runs.
  • Self-Checking. The test should be able to automatically detect if it passed or failed without any human interaction.
  • Timely. A unit test should not take a disproportionately long time to write compared to the code being tested. If you find testing the code taking a large amount of time compared to writing the code, consider a design that is more testable.

Code coverage

A high code coverage percentage is often associated with a higher quality of code. However, the measurement itself cannot determine the quality of code. Setting an overly ambitious code coverage percentage goal can be counterproductive. Imagine a complex project with thousands of conditional branches, and imagine that you set a goal of 95% code coverage. Currently the project maintains 90% code coverage. The amount of time it takes to account for all of the edge cases in the remaining 5% could be a massive undertaking, and the value proposition quickly diminishes.

A high code coverage percentage is not an indicator of success, nor does it imply high code quality. It just represents the amount of code that is covered by unit tests. For more information, see unit testing code coverage.

Let's speak the same language

The term mock is unfortunately often misused when talking about testing. The following points define the most common types of fakes when writing unit tests:

Fake - A fake is a generic term that can be used to describe either a stub or a mock object. Whether it's a stub or a mock depends on the context in which it's used. So in other words, a fake can be a stub or a mock.

Mock - A mock object is a fake object in the system that decides whether or not a unit test has passed or failed. A mock starts out as a Fake until it's asserted against.

Stub - A stub is a controllable replacement for an existing dependency (or collaborator) in the system. By using a stub, you can test your code without dealing with the dependency directly. By default, a stub starts out as a fake.

Consider the following code snippet:
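The original snippet is not reproduced here; the following is a minimal reconstruction of the situation being described, assuming xUnit (using Xunit;) and illustrative Purchase and MockOrder classes with a ValidateOrders method and a CanBeShipped property:

    [Fact]
    public void Purchase_ValidOrder_CanBeShipped()
    {
        // MockOrder is only used to construct the system under test;
        // nothing is ever asserted against it, so it is really a stub.
        var mockOrder = new MockOrder();
        var purchase = new Purchase(mockOrder);

        purchase.ValidateOrders();

        Assert.True(purchase.CanBeShipped);
    }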

This would be an example of a stub being referred to as a mock. In this case, it is a stub. You're just passing in the Order as a means to be able to instantiate Purchase (the system under test). The name MockOrder is also misleading because, again, the order is not a mock.

A better approach would be:
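(A sketch under the same assumptions, with the class renamed.)

    [Fact]
    public void Purchase_ValidOrder_CanBeShipped()
    {
        // FakeOrder merely satisfies the Purchase constructor;
        // the assert never touches it, so here it acts as a stub.
        var fakeOrder = new FakeOrder();
        var purchase = new Purchase(fakeOrder);

        purchase.ValidateOrders();

        Assert.True(purchase.CanBeShipped);
    }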

By renaming the class to FakeOrder, you've made the class much more generic: it can be used as a mock or a stub, whichever is better for the test case. In the above example, FakeOrder is used as a stub. You're not using FakeOrder in any shape or form during the assert; it was passed into the Purchase class simply to satisfy the requirements of the constructor.

To use it as a Mock, you could do something like this:
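(Again a sketch; the Validated property on FakeOrder is an assumed example of state that Purchase sets.)

    [Fact]
    public void Purchase_ValidOrder_MarksOrderAsValidated()
    {
        var mockOrder = new FakeOrder();
        var purchase = new Purchase(mockOrder);

        purchase.ValidateOrders();

        // Asserting against the fake itself is what makes it a mock.
        Assert.True(mockOrder.Validated);
    }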

In this case, you are checking a property on the Fake (asserting against it), so in the above code snippet, the mockOrder is a Mock.

Important

It's important to get this terminology correct. If you call your stubs 'mocks', other developers are going to make false assumptions about your intent.

The main thing to remember about mocks versus stubs is that mocks are just like stubs, but you assert against the mock object, whereas you do not assert against a stub.

Best practices

Try not to introduce dependencies on infrastructure when writing unit tests. These make the tests slow and brittle and should be reserved for integration tests. You can avoid these dependencies in your application by following the Explicit Dependencies Principle and using Dependency Injection. You can also keep your unit tests in a separate project from your integration tests. This ensures your unit test project doesn't have references to or dependencies on infrastructure packages.

Naming your tests

The name of your test should consist of three parts:

  • The name of the method being tested.
  • The scenario under which it's being tested.
  • The expected behavior when the scenario is invoked.

Why?

  • Naming standards are important because they explicitly express the intent of the test.

Tests are more than just making sure your code works; they also provide documentation. Just by looking at the suite of unit tests, you should be able to infer the behavior of your code without even looking at the code itself. Additionally, when tests fail, you can see exactly which scenarios do not meet your expectations.

Bad:
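The article's original examples are not reproduced here; the following sketches assume xUnit and a hypothetical StringCalculator class whose Add method sums the numbers in a delimited string.

    [Fact]
    public void Test_Single()
    {
        var stringCalculator = new StringCalculator();

        var actual = stringCalculator.Add("0");

        Assert.Equal(0, actual);
    }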

Better:
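(Same assumed StringCalculator as above; the name now encodes the method under test, the scenario, and the expected behavior.)

    [Fact]
    public void Add_SingleNumber_ReturnsSameNumber()
    {
        var stringCalculator = new StringCalculator();

        var actual = stringCalculator.Add("0");

        Assert.Equal(0, actual);
    }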

Arranging your tests

Arrange, Act, Assert is a common pattern when unit testing. As the name implies, it consists of three main actions:

  • Arrange your objects, creating and setting them up as necessary.
  • Act on an object.
  • Assert that something is as expected.

Why?

  • Clearly separates what is being tested from the arrange and assert steps.
  • Less chance to intermix assertions with 'Act' code.

Readability is one of the most important aspects when writing a test. Separating each of these actions within the test clearly highlights the dependencies required to call your code, how your code is being called, and what you are trying to assert. While it may be possible to combine some steps and reduce the size of your test, the primary goal is to make the test as readable as possible.

Bad:
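(A sketch with the assumed StringCalculator from earlier.)

    [Fact]
    public void Add_EmptyString_ReturnsZero()
    {
        // Arrange, act, and assert are all crammed into a single statement.
        Assert.Equal(0, new StringCalculator().Add(""));
    }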

Better:
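(Same assumptions, with the three actions separated.)

    [Fact]
    public void Add_EmptyString_ReturnsZero()
    {
        // Arrange
        var stringCalculator = new StringCalculator();

        // Act
        var actual = stringCalculator.Add("");

        // Assert
        Assert.Equal(0, actual);
    }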


Write minimally passing tests

The input to be used in a unit test should be the simplest possible in order to verify the behavior that you are currently testing.

Why?

  • Tests become more resilient to future changes in the codebase.
  • Closer to testing behavior over implementation.

Tests that include more information than required to pass the test have a higher chance of introducing errors into the test and can make the intent of the test less clear. When writing tests, you want to focus on the behavior. Setting extra properties on models or using non-zero values when not required only detracts from what you are trying to prove.

Bad:
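(Sketch with the assumed StringCalculator; the only point of difference is the choice of input value.)

    [Fact]
    public void Add_SingleNumber_ReturnsSameNumber()
    {
        var stringCalculator = new StringCalculator();

        // "42" adds nothing over the simplest input; it invites the reader
        // to look for significance that isn't there.
        var actual = stringCalculator.Add("42");

        Assert.Equal(42, actual);
    }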

Better:
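(Same sketch with the simplest possible input.)

    [Fact]
    public void Add_SingleNumber_ReturnsSameNumber()
    {
        var stringCalculator = new StringCalculator();

        var actual = stringCalculator.Add("0");

        Assert.Equal(0, actual);
    }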

Avoid magic strings

Naming variables in unit tests is as important, if not more important, than naming variables in production code. Unit tests should not contain magic strings.

Why?

  • Prevents the need for the reader of the test to inspect the production code in order to figure out what makes the value special.
  • Explicitly shows what you're trying to prove rather than trying to accomplish.

Magic strings can cause confusion to the reader of your tests. If a string looks out of the ordinary, they may wonder why a certain value was chosen for a parameter or return value. This may lead them to take a closer look at the implementation details, rather than focus on the test.

Tip

When writing tests, you should aim to express as much intent as possible. In the case of magic strings, a good approach is to assign these values to constants.

Bad:
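(Sketch with the assumed StringCalculator; assumes using System; for Action and OverflowException.)

    [Fact]
    public void Add_BigNumber_ThrowsException()
    {
        var stringCalculator = new StringCalculator();

        // Why "1001"? The reader has to inspect the production code to find out.
        Action actual = () => stringCalculator.Add("1001");

        Assert.Throws<OverflowException>(actual);
    }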

Better:
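(Same assumptions; the magic string is replaced with a named constant.)

    [Fact]
    public void Add_MaximumSumResult_ThrowsOverflowException()
    {
        var stringCalculator = new StringCalculator();
        const string MAXIMUM_RESULT = "1001";

        // The constant's name explains what makes the value special.
        Action actual = () => stringCalculator.Add(MAXIMUM_RESULT);

        Assert.Throws<OverflowException>(actual);
    }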

Avoid logic in tests

When writing your unit tests, avoid manual string concatenation and logical conditions such as if, while, for, switch, etc.

Why?

  • Less chance to introduce a bug inside of your tests.
  • Focus on the end result, rather than implementation details.

When you introduce logic into your test suite, the chance of introducing a bug into it increases dramatically. The last place that you want to find a bug is within your test suite. You should have a high level of confidence that your tests work, otherwise you will not trust them. Tests that you do not trust do not provide any value. When a test fails, you want to have a sense that something is actually wrong with your code and that it cannot be ignored.

Tip

If logic in your test seems unavoidable, consider splitting the test up into two or more different tests.

Bad:
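(Sketch with the assumed StringCalculator.)

    [Fact]
    public void Add_MultipleNumbers_ReturnsCorrectResults()
    {
        var stringCalculator = new StringCalculator();
        var expected = 0;
        var testCases = new[] { "0,0,0", "0,1,2", "1,2,3" };

        // The loop and the running total are logic that can itself contain bugs.
        foreach (var test in testCases)
        {
            Assert.Equal(expected, stringCalculator.Add(test));
            expected += 3;
        }
    }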

Better:
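(Same assumptions, rewritten as an xUnit parameterized test; each case stands alone and contains no logic.)

    [Theory]
    [InlineData("0,0,0", 0)]
    [InlineData("0,1,2", 3)]
    [InlineData("1,2,3", 6)]
    public void Add_MultipleNumbers_ReturnsSumOfNumbers(string input, int expected)
    {
        var stringCalculator = new StringCalculator();

        var actual = stringCalculator.Add(input);

        Assert.Equal(expected, actual);
    }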

Prefer helper methods to setup and teardown

If you require a similar object or state for your tests, prefer a helper method to leveraging Setup and Teardown attributes, if they exist.

Why?

  • Less confusion when reading the tests since all of the code is visible from within each test.
  • Less chance of setting up too much or too little for the given test.
  • Less chance of sharing state between tests, which creates unwanted dependencies between them.

In unit testing frameworks, Setup is called before each and every unit test within your test suite. While some may see this as a useful tool, it generally ends up leading to bloated and hard to read tests. Each test will generally have different requirements in order to get the test up and running. Unfortunately, Setup forces you to use the exact same requirements for each test.

Note

xUnit has removed both SetUp and TearDown as of version 2.x

Bad:
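(Sketch with the assumed StringCalculator; in xUnit, shared setup lives in the test class constructor.)

    public class StringCalculatorTests
    {
        private readonly StringCalculator stringCalculator;

        public StringCalculatorTests()
        {
            // Shared setup: every test in the class silently depends on this.
            stringCalculator = new StringCalculator();
        }

        [Fact]
        public void Add_TwoNumbers_ReturnsSumOfNumbers()
        {
            var result = stringCalculator.Add("0,1");

            Assert.Equal(1, result);
        }
    }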

Better:
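(Same assumptions, using a helper method instead.)

    public class StringCalculatorTests
    {
        [Fact]
        public void Add_TwoNumbers_ReturnsSumOfNumbers()
        {
            // All setup is visible inside the test itself.
            var stringCalculator = CreateDefaultStringCalculator();

            var actual = stringCalculator.Add("0,1");

            Assert.Equal(1, actual);
        }

        private StringCalculator CreateDefaultStringCalculator()
        {
            return new StringCalculator();
        }
    }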

Avoid multiple asserts

When writing your tests, try to only include one Assert per test. Common approaches to using only one assert include:

  • Create a separate test for each assert.
  • Use parameterized tests.

Why?

  • If one Assert fails, the subsequent Asserts will not be evaluated.
  • Ensures you are not asserting multiple cases in your tests.
  • Gives you the entire picture as to why your tests are failing.

When introducing multiple asserts into a test case, it is not guaranteed that all of them will be executed. In most unit testing frameworks, once an assertion fails in a unit test, the remaining assertions are never evaluated and are automatically considered to be failing. This can be confusing, as functionality that is actually working will be shown as failing.

Note

A common exception to this rule is when asserting against an object. In this case, it is generally acceptable to have multiple asserts against each property to ensure the object is in the state that you expect it to be in.

Bad:
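(Sketch with the assumed StringCalculator.)

    [Fact]
    public void Add_EdgeCases_ThrowsArgumentExceptions()
    {
        var stringCalculator = new StringCalculator();

        // If the first assert fails, the second is never evaluated.
        Assert.Throws<ArgumentException>(() => stringCalculator.Add(null));
        Assert.Throws<ArgumentException>(() => stringCalculator.Add("a"));
    }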

Better:
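(Same assumptions, as a parameterized test with one assert per case.)

    [Theory]
    [InlineData(null)]
    [InlineData("a")]
    public void Add_InputNullOrAlpha_ThrowsArgumentException(string input)
    {
        var stringCalculator = new StringCalculator();

        Action actual = () => stringCalculator.Add(input);

        Assert.Throws<ArgumentException>(actual);
    }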

Validate private methods by unit testing public methods

In most cases, there should not be a need to test a private method. Private methods are an implementation detail. You can think of it this way: private methods never exist in isolation. At some point, there is going to be a public facing method that calls the private method as part of its implementation. What you should care about is the end result of the public method that calls into the private one.

Consider the following case:
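(The original listing is not reproduced here; this is a hypothetical reconstruction using the ParseLogLine and TrimInput names from the text. The enclosing LogParser class is an assumed name.)

    public class LogParser
    {
        public string ParseLogLine(string input)
        {
            var sanitizedInput = TrimInput(input);
            return sanitizedInput;
        }

        private string TrimInput(string input)
        {
            return input.Trim();
        }
    }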

Your first reaction may be to start writing a test for TrimInput because you want to make sure the method is working as expected. However, it is entirely possible that ParseLogLine manipulates sanitizedInput in a way that you do not expect, rendering a test against TrimInput useless.

The real test should be done against the public facing method ParseLogLine because that is what you should ultimately care about.

With this viewpoint, if you see a private method, find the public method and write your tests against that method. Just because a private method returns the expected result, does not mean the system that eventually calls the private method uses the result correctly.

Stub static references

One of the principles of a unit test is that it must have full control of the system under test. This can be problematic when production code includes calls to static references (for example, DateTime.Now). Consider the following code:
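(The original listing is not reproduced here; the following hypothetical PriceCalculator illustrates the kind of code being described, using the Tuesday scenario mentioned below. Assumes using System;.)

    public class PriceCalculator
    {
        public int GetDiscountedPrice(int price)
        {
            // Hidden, uncontrollable dependency on the system clock.
            if (DateTime.Now.DayOfWeek == DayOfWeek.Tuesday)
            {
                return price / 2;
            }

            return price;
        }
    }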

How can this code possibly be unit tested? You may try an approach such as:
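(A sketch of such an attempt, assuming xUnit and the hypothetical PriceCalculator above.)

    [Fact]
    public void GetDiscountedPrice_NotTuesday_ReturnsFullPrice()
    {
        var priceCalculator = new PriceCalculator();

        var actual = priceCalculator.GetDiscountedPrice(2);

        Assert.Equal(2, actual);
    }

    [Fact]
    public void GetDiscountedPrice_OnTuesday_ReturnsHalfPrice()
    {
        var priceCalculator = new PriceCalculator();

        var actual = priceCalculator.GetDiscountedPrice(2);

        Assert.Equal(1, actual);
    }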

Unfortunately, you will quickly realize that there are a couple of problems with your tests.

  • If the test suite is run on a Tuesday, the second test will pass, but the first test will fail.
  • If the test suite is run on any other day, the first test will pass, but the second test will fail.

To solve these problems, you'll need to introduce a seam into your production code. One approach is to wrap the code that you need to control in an interface and have the production code depend on that interface.
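For example (IDateTimeProvider is an illustrative name, not one mandated by the article):

    public interface IDateTimeProvider
    {
        DayOfWeek DayOfWeek();
    }

    public class PriceCalculator
    {
        public int GetDiscountedPrice(int price, IDateTimeProvider dateTimeProvider)
        {
            // The day of the week now comes through a seam the test controls.
            if (dateTimeProvider.DayOfWeek() == DayOfWeek.Tuesday)
            {
                return price / 2;
            }

            return price;
        }
    }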

Your test suite now becomes:
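(A sketch continuing the hypothetical example; a hand-rolled stub is used here, though a mocking library could provide the same control.)

    [Fact]
    public void GetDiscountedPrice_NotTuesday_ReturnsFullPrice()
    {
        var priceCalculator = new PriceCalculator();
        var dateTimeProviderStub = new DateTimeProviderStub(DayOfWeek.Monday);

        var actual = priceCalculator.GetDiscountedPrice(2, dateTimeProviderStub);

        Assert.Equal(2, actual);
    }

    [Fact]
    public void GetDiscountedPrice_OnTuesday_ReturnsHalfPrice()
    {
        var priceCalculator = new PriceCalculator();
        var dateTimeProviderStub = new DateTimeProviderStub(DayOfWeek.Tuesday);

        var actual = priceCalculator.GetDiscountedPrice(2, dateTimeProviderStub);

        Assert.Equal(1, actual);
    }

    // A hand-rolled stub that always reports the configured day.
    public class DateTimeProviderStub : IDateTimeProvider
    {
        private readonly DayOfWeek dayOfWeek;

        public DateTimeProviderStub(DayOfWeek dayOfWeek)
        {
            this.dayOfWeek = dayOfWeek;
        }

        public DayOfWeek DayOfWeek()
        {
            return dayOfWeek;
        }
    }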

Now the test suite has full control over DateTime.Now and can stub any value when calling into the method.