If you’re working in a repository that contains more than one version of your code, then here’s an approach to manage adding a feature that should appear in multiple versions.
This often happens with microservices, when a breaking change is introduced and not all clients can immediately upgrade. It can also happen with a library that needs to support multiple versions of an underlying framework.
If there’s no build configuration with embedded versions in it, then this could be as simple as adding the feature on one version’s branch, and then merging the commit into both versions’ branches.
But more often than not, there will be differences between the two branches used to generate the appropriate versions, and these will be embedded in build files. Think Maven pom.xml files or Gradle build.gradle files.
This post describes an approach that allows for using mandatory Pull Requests through something like GitHub, GitLab, or Bitbucket.
First, add the new feature to a branch created from one of the versions’ master branches. In this case, it was branched from Version 1’s master branch.
Once the feature has been developed, create a Pull Request and once approved, merge it into Version 1’s master branch.
Odds are it won’t be possible to create a Pull Request for the feature branch into master-v2 as there will likely be conflicts in the build configuration.
In order to overcome this, we will have to take some control over the merge process.
Create a new branch rooted at master-v2. Call it something like feature/merge-add-stuff. Then merge feature/add-stuff into feature/merge-add-stuff, resolving any conflicts that occur. Remember that this is effectively a merge into the master-v2 branch, so any conflict resolution should be to ensure the code works for Version 2.
Perform any tests, and any additional fixups required to incorporate this change into master-v2. Whether to use commit --amend so that the merge looks like it was done in one go, or to add the fixups as new commits, is up to you. I prefer the former, but that’s a team decision.
Once all conflicts are resolved, push the feature/merge-add-stuff branch to your central server, and create a Pull Request from feature/merge-add-stuff to master-v2.
Once it is approved, merge it into master-v2, and delete the feature branches.
Ideally, your tool will allow for a fast-forward merge, so then it will appear as if you directly merged feature/add-stuff into master-v2, rather than introducing an additional commit, but again, this is a team decision.
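The steps above can be sketched in git commands. This is a self-contained demo in a throwaway repository, with branch names matching the example:

```shell
# Throwaway repository to demonstrate the workflow
cd "$(mktemp -d)" && git init -q . && git config user.email you@example.com && git config user.name demo

# Seed a shared history with the two version branches
echo base > app.txt && git add app.txt && git commit -qm "initial"
git branch master-v1 && git branch master-v2

# Develop the feature on a branch off master-v1
git checkout -q master-v1 && git checkout -q -b feature/add-stuff
echo feature > feature.txt && git add feature.txt && git commit -qm "add stuff"
# ... PR: merge feature/add-stuff into master-v1 ...

# Prepare the merge into Version 2 on its own branch
git checkout -q master-v2 && git checkout -q -b feature/merge-add-stuff
git merge -q --no-edit feature/add-stuff   # resolve any conflicts in favour of Version 2 here

# Push feature/merge-add-stuff and open a PR into master-v2;
# once approved, master-v2 can fast-forward to it:
git checkout -q master-v2 && git merge -q --ff-only feature/merge-add-stuff
```

With the fast-forward merge at the end, the history looks as if feature/add-stuff had been merged directly into master-v2.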
Mocking is a powerful aid to TDD and ensuring that edge case testing can be done without requiring downstream dependencies.
This is especially valuable when testing interactions with something like a Database.
Mocks also allow for verifying interactions and that parameters are constructed correctly.
The downside is they can create tight coupling between the test and the implementation, thereby reducing the ease of refactoring.
But this article will not focus on proper use of Mocks. The focus is comparing two commonly used mocking frameworks for Kotlin tests.
For Java development, Mockito is the most commonly used library. For Kotlin, Mockito works very well, BUT a few extra libraries make the interactions much easier:
• mockito-inline for mocking final classes.
• mockito-kotlin, which provides helpers for more idiomatic code and addresses some incompatibilities: e.g. when → whenever, issues with any() when a parameter is nullable, plus some additional helpers.
Using the MockitoExtension in JUnit Jupiter with annotation-based mocking results in strict checking. Basically, if no expectations are defined on a mock object, it will act like a stub, returning a reasonable default for any call.
As soon as an expectation is defined, the Mock becomes strict. It will report any calls that don’t match a defined expectation AND will report any expectations that were defined, but not called. This is like defining verify on every interaction, and adding a verify no more interactions.
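A sketch of that behaviour (the UserRepository interface is hypothetical, and the imports assume current mockito-kotlin coordinates):

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test
import org.junit.jupiter.api.extension.ExtendWith
import org.mockito.Mock
import org.mockito.junit.jupiter.MockitoExtension
import org.mockito.kotlin.whenever

// Hypothetical interface for illustration
interface UserRepository {
    fun findName(id: Int): String?
}

@ExtendWith(MockitoExtension::class)
class UserRepositoryTest {
    @Mock
    private lateinit var repository: UserRepository

    @Test
    fun `strict stubbing reports unused or unmatched expectations`() {
        // With no stubbing, the mock returns defaults (acts like a stub).
        // Once this expectation exists, the mock is strict: an unmatched
        // call, or an unused stubbing, fails the test.
        whenever(repository.findName(1)).thenReturn("Alice")

        assertEquals("Alice", repository.findName(1))
    }
}
```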
There is no option to ensure a mock is actually used as a mock until at least one interaction is defined.
The only way to ensure all mock calls are defined would be to call verify with noMoreInteractions on all mocks at the end of the test.
The strict enforcement does not appear to work if you call mock directly, regardless of whether you’re using the MockitoExtension or not, either in the init of the class or in an @BeforeEach function. I have not dug in to determine why, but that would be my preferred way of defining mocks, as then all variables could be val instead of lateinit var.
Mockk is a Kotlin-specific framework, also written in Kotlin. This provides some advantages, as it can fully leverage the power of Kotlin and provide a more idiomatic approach to mocking.
It contains built-in features for mocking final classes, extension functions, coroutines, constructors, and private functions.
There is even work started to support multi-platform.
Mockk makes a distinction between stubs (relaxed mocks) and mocks (strict). If a mock is defined as relaxed, it will provide a reasonable default for any calls.
A strict mock must be told about any interactions, so if an unexpected call is performed on a strict mock, a very informative error is shown.
Mockk does not automatically perform verification (like Mockito does using the MockitoExtension/Strict Stubbing). The only way to accomplish that is to explicitly define the required verify statements.
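A minimal sketch of the relaxed/strict distinction (the UserRepository interface is hypothetical):

```kotlin
import io.mockk.every
import io.mockk.mockk
import io.mockk.verify

// Hypothetical interface for illustration
interface UserRepository {
    fun findName(id: Int): String?
}

fun main() {
    // Relaxed mock: acts like a stub, returning a reasonable default for any call
    val relaxed = mockk<UserRepository>(relaxed = true)
    relaxed.findName(1) // no expectation needed; a default value is returned

    // Strict mock: every interaction must be declared up front
    val strict = mockk<UserRepository>()
    every { strict.findName(1) } returns "Alice"
    strict.findName(1)    // matches the expectation
    // strict.findName(2) // would fail with an informative error

    // Verification is explicit; Mockk does not verify automatically
    verify { strict.findName(1) }
}
```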
Mockk has a Rule/Extension for use with JUnit that allows for annotated parameters. For JUnit Jupiter, the extension also allows for annotation of fun parameters.
My preference is to use val properties and call mockk directly rather than having to use lateinit var and the @MockK annotations.
I like Mockk’s explicit distinction between stub and mock, its knowledge of Kotlin, and the good error messages if an interaction is not defined.
I miss the automatic verification to ensure all interactions are explicitly defined.
Using MockitoExtension and @Mock annotations provides automatic verification. So if that’s what you want, that’s the way to go.
The MockitoExtension with Jupiter does not allow for injection of mocks into constructors or methods; it only allows for @Mock annotations, and strict verification of mocking.
If you want performance, and are OK with vague error messages or no warnings on incorrect mocks, you can use the mock() function instead of annotations.
Mockk seems like a better compromise. You can manually add verification of interactions on mocks, and calling mockk directly avoids lateinit var. It also allows for explicit expectation definition versus returning reasonable default values. The one problem is that Mockk seemed slower at running tests, though both frameworks take a bit of warm-up for the first test.
The Mockk extension for Jupiter does allow for injection of mocks into constructors or methods, and performs the same verification as calling mockk() directly, so any approach results in the same use cases/verifications. There are tickets open on Mockk to investigate performance, along with some suggestions. It’s not a terrible difference, and with any luck, it will be addressed.
Overall, I lean to Mockk for any new projects. Explicit definition of stub vs mock, good error messages for unexpected interactions, better JUnit Jupiter support, and good understanding/support for Kotlin out of the box.
I would miss the strict verification of Mockito, but I think that’s a reasonable tradeoff.
Great article on good practices to adopt for test writing and Kotlin.
In Part 1, I wrote about the many benefits of Kotlin for writing better, easier to understand code, and eliminating third-party libraries and tools.
This post will focus on Code Quality, what Kotlin has to improve quality, and tools available for Static Code Analysis.
Not surprisingly, all teams/organizations are concerned with code quality. Reading books like “Effective Java” by Joshua Bloch and “Code Complete” by Steve McConnell, testing, code reviews and pair programming are all items that can improve the quality of code.
Anyone coding with Java should at least be aware of the “Effective Java” book by Joshua Bloch. It details a number of strongly recommended practices for code development in general, and Java specifically.
The developers of Kotlin have a stated goal of implementing all of Effective Java in the compiler/language. Not everything is covered, but the major cases are.
val and var clearly mark whether something can/should change or not (prefer val, Kotlin’s equivalent of final).
Classes are closed (final) by default, and must be explicitly opened, thereby prohibiting inheritance unless it is designed for.
The compiler warns of unused items in a class/object, provides explicit null management via the type system, can enforce nullability from Java code if it’s annotated appropriately (JSR-305 and the other flavours of annotations).
The other main tool used by many Java shops is Sonarqube. This provides static analysis of the code base, and is performed as part of the Continuous Integration phase. i.e. on a build server after code has been pushed.
This can be a sore spot for many developers, as they think they’re done with their code and, depending on pipeline speed, don’t get notified of issues until well after making a Pull Request. Given there are tools like SonarLint, this doesn’t have to be the case, but it does require additional ‘effort’ by the team to ensure SonarLint is installed in their IDEs and that they monitor its status.
For Kotlin, Sonarqube does not currently have official support. This is unfortunate, as it will prevent some organizations from adopting Kotlin.
There are static analysis tools available for Kotlin, including the analysis built into IntelliJ. If you use IntelliJ for Git interactions too, ensure you enable the ‘Perform code analysis’ checkbox on the Commit dialog, and review the warnings it raises. IntelliJ will also show suggestions when coding, so there shouldn’t be any items left by the time you’re committing.
The tool my team has adopted is Detekt. This tool is constantly improving, and provides the ability to create custom rules.
It can be run standalone, or via a plugin for Maven or Gradle. I prefer the plugin approach, and have it execute after build and unit test, but before Integration tests.
By integrating it directly into the build, the team is assured that Detekt is always run on the code. Any issues are raised directly during development, and it’s always easiest to address things during development, while you’re in the context of the code.
Detekt configuration is quite easy. My team has recently adopted the approach of having all rules on by default, and then modifying/disabling only those specific ones that we feel should be different. As Detekt doesn’t currently provide a mechanism for a central configuration for rules, this simplifies ensuring all projects are conforming to the same standards.
If too many issues exist in the code base, Detekt will break the build. This forces the team to address code quality issues NOW, rather than letting them accumulate on a dashboard, never to be addressed. This also means the team decides what rules are important to it, and which ones it doesn’t want.
The author of Detekt has also created a plugin for Sonarqube. This plugin is not yet available through the Sonarqube Update Centre, so it may be a problem to get it installed at your corporation. It needs to be built from source and installed into the plugins folder on the Sonarqube server. I was able to have this done at my employer, as the decision to support Kotlin had been made, and having stats on the Sonarqube dashboard was also a prerequisite.
Once the plugin is installed, Sonarqube will execute Detekt on any projects that contain Kotlin code. The results will appear as expected on the dashboard.
The plugin will also show Jacoco code coverage information on the dashboard. To get the Jacoco coverage information appearing, add the following to your Gradle configuration for Sonarqube:
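A sketch of such a configuration, assuming Jacoco’s default report location (the sonar.jacoco.reportPaths property name should be verified against your Sonarqube scanner version):

```groovy
// build.gradle — hypothetical sketch; verify property names for your plugin versions
sonarqube {
    properties {
        property "sonar.jacoco.reportPaths", "${buildDir}/jacoco/test.exec"
    }
}
```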
By default, Sonarqube will use the Detekt configuration incorporated into the plugin installed on Sonarqube. At the time of writing, there is no option to customize the configuration used by Sonarqube via the Sonarqube UI.
In order to ensure consistency, my team configures Sonarqube to use the same configuration file that is used during the local build. This is performed by adding the following to Sonarqube properties in the build.gradle file. The following will use the file ‘detekt.yml’ located in the root directory of the project.
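A sketch of that configuration (the detekt.sonar.kotlin.config.path property name comes from the Detekt Sonarqube plugin and should be verified against your plugin version):

```groovy
// build.gradle — hypothetical sketch; points Sonarqube at the same
// detekt.yml used during the local build
sonarqube {
    properties {
        property "detekt.sonar.kotlin.config.path", "${projectDir}/detekt.yml"
    }
}
```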
Using the above, I have been able to ensure code quality in the Kotlin code base, and comply with organization requirements. The experience of developing Kotlin code has been very good overall, and it keeps getting better. JetBrains and the community are very active: constantly improving the tooling, with more documentation and more podcasts. Every time I’ve run into an issue, I’ve been able to find a solution with a quick search on the Kotlin Forums, Stack Overflow, or blog posts.
Pivotal is improving its support of Kotlin in the Spring Framework. Spring Framework 5.0 has been fully updated with nullability annotations, as well as helpers that allow for more idiomatic Kotlin usage of Spring.
Spring Boot 2.0 (currently M7) contains a number of enhancements to support Kotlin as well.
Google provided a huge boost to Kotlin, and now most teams developing Android apps are at least investigating Kotlin.
It’s definitely a promising language, with great community, and tool support behind it!
A great collection of situations and solutions for Git, constantly evolving as it is hosted on GitHub and accepts additions.
If you see something missing, and know the solution, please add it.
A great video summing up 55 New Features in JDK9. Well worth watching, as the presenter covers major and minor features, as well as deprecations.
Improvements to Collections, including new, fast construction of List, Set, and Map (finally); discussions around Jigsaw; and a new linker that will allow creation of a module that ONLY contains the dependent jars/modules required.
Awareness of Kotlin drastically increased after Google I/O, but if you’re not developing on Android, why should you care?
Well, let me tell you. Kotlin is the Java we’ve been waiting for.
• Rely on your IDE to generate getters/setters/equals/hashCode?
• Use Lombok for defining data classes?
• Write lots of null check code, and lots of tests to verify null handling?
• Use annotations to verify null handling?
• Add external static analyzers for null verification?
• Leverage Optional (but only for return values, right?)
• Use Guava to provide a number of utilities?
• Use Apache commons libraries for many utility classes?
• Create utility classes that are effectively extensions for a Class?
• Create many overloaded functions in lieu of default parameters?
• Want to create DSLs?
Then perhaps it’s time to switch to Kotlin. Yes, there’s some overhead in learning the language, and the compiler is a bit slower than Java’s, but given the items in the list above that need to be learned or known, and the extra processors for code generation and analysis, I think it’s a worthwhile transition, and ultimately it isn’t any slower.
Data classes are built into the language. Getters, setters, equals, hashCode, toString, and copy are all generated for you. Now when you’re reviewing code, you know immediately what’s defined for this class. No need to confirm that nothing extra is done in a setter/getter. No need to verify the implementation of equals/hashCode, or to ensure it has both.
If you use Lombok, you don’t have the above worries either, BUT you do require a plugin for your IDE, and additional build configuration to ensure the Lombok generator is executed.
An example data class that contains an immutable birthDate and a mutable name property:

```kotlin
data class MyPojo(val birthDate: String, var name: String)
```
Null verification is built into the language. Just define the type, and it is non-null; to let the compiler know a variable is nullable, append a ? to the type. Very simple and readable. The same applies to return values. No need to use a wrapper class (Optional) with all the new learning to use it properly (isPresent is not proper usage).
The compiler will verify via flow analysis, and unless you turn it off, verification will be performed at runtime.
No need to add annotations for non-null, or rely on conventions that all parameters are non-null. No need to wire in additional analysis/generation tools. No need to write code to verify parameters are non-null.
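A minimal sketch of both cases:

```kotlin
// message is non-null by default; the compiler rejects shout(null)
fun shout(message: String): String = message.uppercase()

// A trailing ? marks the parameter as nullable, and the compiler
// forces null handling (here via the elvis operator) before use
fun describe(name: String?): String = "Hello, ${name ?: "stranger"}"

fun main() {
    println(shout("hi"))      // HI
    // println(shout(null))   // compile error
    println(describe(null))   // Hello, stranger
}
```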
The standard library contains many extensions to common classes. Everything I’ve ever needed from commons-lang3 is in the Kotlin standard library. Guava’s Predicates is commonly used; once again, this type of functionality is built into the Kotlin standard library.
And for comparison, Guava 22.0 is 2.6 MB and commons-lang3 3.6 is 495 KB, for a total of about 3 MB, while the Kotlin stdlib is 881 KB, stdlib-jdk8 is 12 KB, and stdlib-jdk7 is 3 KB, for a total of 896 KB. Quite compact considering what it provides.
Kotlin provides extension functions so that you can provide additional methods on a Type whether it’s in your code base, or in a library. When writing code, extension functions look as if they’re defined on the Type (hence the name).
// Note, this already exists in stdlib
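A sketch of what such an extension looks like (as noted, the stdlib already ships an equivalent, String.isBlank(); the name here is hypothetical):

```kotlin
// Hypothetical re-implementation; the stdlib already provides isBlank()
fun String.isOnlyWhitespace(): Boolean = all { it.isWhitespace() }

fun main() {
    // Call sites read as if the method were declared on String itself
    println("   ".isOnlyWhitespace()) // true
    println("hi".isOnlyWhitespace())  // false
}
```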
Hopefully you see that Kotlin encapsulates a number of best practices and standards directly into the language and compiler, rather than relying on the external libraries and tools regularly applied when coding in Java.
And this is only the beginning of the features that Kotlin provides that make code easier to review, and reduce the chance of common errors creeping into the code.
I will create follow up posts to cover more features/benefits of Kotlin. Next will be static analysis.
Must watch video by Jez Humble - Continuous Delivery in Agile. The Last 20 minutes on Diversity are very impactful.
This was presented at Agile2017, and was primarily about Continuous Delivery. Jez does an excellent job explaining why this can be done anywhere, and debunks 4 common reasons given for not being able to do it ‘here’.
The last 20 minutes are very powerful. It needs to be its own talk on Diversity in technology, as it is so compelling.
The talk was driven in response to James Damore’s letter to Google. Jez did a lot of research, and tears apart the letter, while providing his own explanations for why the lack of diversity exists.
The main determination is this: people believing that a skill is innate, and propagating that belief, leads to alienation. Alienation drives people elsewhere. This manifests as “This is too hard for you,” “You weren’t born with this ability,” and similar thoughts.
He then shows a chart of the percentage of women in the medical, legal, physical sciences, and computer science fields. All four had been steadily increasing since the beginning of the chart. Then the ’80s arrive and it’s all downhill for CS, while the others continue upwards and are now close to 50%. Why? Please watch the video to see Jez’s explanation.
Everyone knows about the shortage of software developers. Here’s a large portion of the potential workforce, but they have no interest. This MUST change, and it starts with each and every one of us!
I always try to keep my commits focused on one thing. This was brought to my attention recently as I was doing a code review: a number of refactors were mixed in with code changes. That makes a review much harder to perform, and makes it harder for both developer and reviewer to ensure things are correct.
Git has many features to help with this. Find a refactor partway through a change? You can pull the refactor portion out into its own commit, and then continue with your code changes.
Doing a large refactor and want to make it smaller? Again, git supports that.
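A non-interactive sketch of the idea, self-contained in a throwaway repository (git add -p does the same thing hunk by hunk, even within a single file):

```shell
# Throwaway demo: commit a refactor separately from a feature change
cd "$(mktemp -d)" && git init -q . && git config user.email you@example.com && git config user.name demo
echo old > util.txt && echo old > feature.txt && git add . && git commit -qm "initial"

# The working tree touches both files, but they belong in separate commits
echo refactored > util.txt
echo feature > feature.txt

# Stage only the refactor and commit it on its own
# (use `git add -p` to split changes within one file, hunk by hunk)
git add util.txt && git commit -qm "refactor: tidy util"

# Then commit the feature change separately
git add feature.txt && git commit -qm "feat: add stuff"
```

Each resulting commit now has one context, which is exactly what a reviewer wants.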
When coding, think about both your own validation of the changes and a code reviewer’s view. As soon as there are more than 100 lines of code in a commit, and especially if there are a number of contexts, the review quality goes way down.
An interesting take on why your employer should support you contributing to Open Source, and why contributing to Open Source doesn’t have to mean using personal time.
If you’re a programmer today, you’re almost certainly taking advantage of Open Source libraries. These libraries are saving you (and by extension, your employer/clients) much money, time and effort.
Therefore your employer/client should be ok with you contributing to the libraries during working hours. Obviously it will be easier if you focus on libraries that are in use for the project, and preferably addressing issues you’ve encountered, or adding features that are valuable to you.
The other benefit is exposure to others’ code, peer reviews, and an all-around improvement of your own design and coding abilities.
Something to think about, and to discuss with your employer/clients if they don’t currently allow it.
Groovy is fantastic. Remind me why I still program in Java?
I’ve been watching some Groovy videos from the recent SpringOne2GX conference. I’ve always really liked Groovy, and use it for all my personal projects. It’s so much more succinct, and the tools around it are very powerful.
After watching these videos, and seeing even more power available, as well as the fantastic enhancements around Type Checking and static compiling, I’m left asking myself why I code in Java at all.
Almost all valid Java code is valid Groovy code, so it’s easy to transition at your own pace. Groovy then allows you to remove so much ceremony, plus it has so many powerful additions: AST transforms for creating an Immutable object, defining a POJO while only having to define the properties, and powerful DSLs that, if written correctly, are type checked at compile time rather than at runtime.
The Spock testing framework combines all the power of JUnit, Mockito, and JUnitParams in an even easier to read and use DSL for testing.
Even if you use Spring Boot, there is very good support for using Groovy everywhere: as the main language, and as the templating language (the new Groovy MarkupTemplate DSL is fantastic: type checked at compile time rather than at runtime, with optimizations performed by the compiler to improve runtime performance).
Spring Data can be replaced with GORM (from Grails). Spring Data allows for reflection-based query methods, BUT these aren’t checked until runtime, and are limited in their functionality. Using the latest GORM DSL, the queries are checked at compile time, and are much more flexible and readable: instead of relying on a very long method name, a closure is used to define the request. This ensures, at compile time, that the People object has a name property and an adult property.
DSL’s written correctly, are type checked at compile time, and your IDE can provide code assistance. Very powerful tool.
Watch the videos below and learn more about Groovy. I intend to incorporate it into my next project, and don’t want to code Java again.