Dependency injection and 100% test coverage

Hey folks, have a question regarding dependency injection
and 100% test coverage.

Suppose you have a function which itself calls some functions of a foreign package
that you definitely do not want called in your tests. Let’s call them evil functions
for the sake of it.
The foreign package does not provide any structs or interfaces for those evil functions,
so you have to create your own interface with methods that act as wrappers around those “evil” functions.
Then you build two implementations of this interface: one for normal builds, which
actually wraps the evil functions, and a mocked version for your tests, which does no harm.

So far this seems to be the normal thing to do (at least from what I have read so far),
but what about test coverage? You can achieve 100% test coverage on methods using
the interface, but you can’t get 100% on the interface implementation for the real
build, since its methods call those damn evil functions. You can’t test those functions
at all. No blog post I have read covers this problem, sadly.

How do you handle cases like this?

Hey Stereodude!

I have struggled with this same problem. I’m curious what makes these functions “evil”? In my case these “evil” functions connected with 3rd party services that had no “sandbox” environment, so calling them was bad unless we were in our production build.

I ended up wrapping these functions as best I could using the method you described above, but at a certain point I could not wrap and test any further. I tried to make the untested code as small as possible, but eventually I had to stop and leave some code untested.

I’m not sure if there is a way around this or it is just an inherent problem of these “evil” functions.

They are not really evil, just unwanted in unit tests like the ones you mentioned. I used this term to differentiate between testable and non-testable functions but that’s maybe a bit misleading :sweat_smile:

In my specific case they are functions from the chromedp package that spawn Chrome processes, which I don’t want to happen in unit tests.

A possible but very dirty solution that came to my mind was to put the wrapper implementation into a separate package and run go test with an explicit package list containing every package except that one. Not very convenient though. Ideally I would like go test ./... to cover 100% of my code.

Haha oh I see :grin:

Hmm yeah, that would be a workaround. I looked into ways you could skip the tests you don’t want to run and came across this Stack Overflow post that seems pretty helpful.

Looks like you can use an environment variable in combination with t.Skip() to skip these tests, or use the -short flag together with testing.Short().

That way you should still be able to run go test ./... and have it skip the tests you don’t want to run, either because your env variable is not set or because you passed the -short flag.
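
Something like this, roughly (just a sketch; the test name, the RUN_EVIL_TESTS variable, and the file/package names are made up):

// yourpkg_test.go (hypothetical)
package yourpkg

import (
    "os"
    "testing"
)

func TestRealWrapper(t *testing.T) {
    // `go test -short ./...` skips this test entirely.
    if testing.Short() {
        t.Skip("skipping test that would call the real thing in -short mode")
    }
    // Alternatively, gate it behind an environment variable that is only set
    // in environments where calling the real thing is acceptable.
    if os.Getenv("RUN_EVIL_TESTS") == "" {
        t.Skip("set RUN_EVIL_TESTS=1 to run this test against the real implementation")
    }

    // ...call the real wrapper implementation here and assert on the result.
}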

According to this blog post, when you test a package, code from other packages is not included in its coverage.

Another option to consider (if you have not already) is build modes, e.g. “plugin”.

Another aspect is how you are implementing DI. If you can explain how you have achieved DI, manipulating that could perhaps allow for an effective solution. For example: I can generate my "import"s dynamically (compile-time DI). What can’t be imported can’t be considered part of the coverage, I guess.

Just sharing options I researched. Apologies if none apply due to my lack of a deeper understanding of your problem, but I am curious!
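
For illustration, one concrete way to get “what is not compiled cannot count against coverage” is build tags. This is only a sketch with made-up file names, package name, and tag, not necessarily how I do it:

// wrapper_real.go (hypothetical): only compiled when the tag is set,
// e.g. `go build -tags realwrapper` for the normal binary.
//go:build realwrapper

package yourpkg

import "example.com/someimport" // hypothetical foreign package

type RealWrapper struct{}

func (w *RealWrapper) DoThingsYouDontWantInTests() {
    someimport.DoThingsYouDontWantInTests()
}

// wrapper_mock.go (hypothetical): what a plain `go test -cover ./...` builds;
// the file above is then not compiled at all, so it cannot show up as uncovered.
//go:build !realwrapper

package yourpkg

type MockWrapper struct{}

func (w *MockWrapper) DoThingsYouDontWantInTests() {}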

Thanks for your input @connerj70 and @ArjunDhar!
Sorry I did not include any example code, as I thought the problem would be a bit lengthy to describe, but let me try (more or less pseudocode):

Suppose you want SomeMethod() tested:

type SomeStruct struct {}

func (s *SomeStruct) SomeMethod() {
    doSomeThings()
    someimport.DoThingsYouDontWantInTests()
    doSomeOtherThings()
}

In order for someimport.DoThingsYouDontWantInTests() to not be executed in tests you create an interface and an implementation wrapping these functions:

type Wrapper interface {
    DoThingsYouDontWantInTests()
}

type ImplementedWrapper struct {}

func (w *ImplementedWrapper) DoThingsYouDontWantInTests() {
    someimport.DoThingsYouDontWantInTests()
}

type SomeStruct struct {
    w Wrapper
}

func (s *SomeStruct) SomeMethod() {
    doSomeThings()
    s.w.DoThingsYouDontWantInTests() // call goes through the injected wrapper
    doSomeOtherThings()
}

So in a test you can do dependency injection on SomeStruct.w:

type MockedWrapper struct {}

func (w *MockedWrapper) DoThingsYouDontWantInTests() {
    fmt.Println("I don't do anything")
}

func TestSomeStruct(t *testing.T) {
    s := SomeStruct{w: &MockedWrapper{}}
    s.SomeMethod()
    // assert here
}

Of course the actual mocking would look a bit different using some form of mocking utility.
The problem is that func (w *ImplementedWrapper) DoThingsYouDontWantInTests() cannot be tested, leaving this code uncovered. I hope this makes clear what I meant.

@connerj70 Do you mean I could write tests which will always be skipped but still count toward coverage?
@ArjunDhar Interesting read! Do you mind explaining the details of how you achieve compile-time imports? Do you use build tags for this?

In the meantime I found out that go test -cover does not report coverage for packages that do not contain any test files. So having tested a bunch of packages and leaving one package (containing only wrapper functions like func (w *ImplementedWrapper) DoThingsYouDontWantInTests()) without test files would not negatively impact code coverage. Is this actually the way to do it?
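
For concreteness, the layout I mean would look roughly like this (package name and import paths are made up):

// wrapper/wrapper.go (hypothetical path): the only file in its own package.
// With no _test.go files next to it, `go test -cover ./...` just prints
// "[no test files]" for this package instead of pulling the total down.
package wrapper

import "example.com/someimport" // hypothetical foreign package

type ImplementedWrapper struct{}

func (w *ImplementedWrapper) DoThingsYouDontWantInTests() {
    someimport.DoThingsYouDontWantInTests()
}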

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.