
Referring more to smaller places like my own - a few hundred employees with a ~20-person IT team (~10 developers).

I've read enough about testing that it seems to be industry standard. But whenever I talk to coworkers and my EM, it's generally, "That would be nice, but it's not practical for our size, and the business wouldn't allow us to slow down for that." We have ~5 manual testers, so things aren't considered "untested," but issues still frequently slip through. It's insurance software, so at least bugs aren't killing people, but our quality still freaks me out a bit.

I try to write automated tests for my own code, since it seems valuable, but I avoid it whenever it's not straightforward. I've read books on testing, but they generally feel like either toy examples or far more effort than my company would be willing to spend. Over time I'm wondering if I'm just overly idealistic, and automated testing is more of a FAANG / bigger company thing.

[-] cbarrick@lemmy.world 42 points 7 months ago

Wow 😲

It's not that hard to set up GitHub or GitLab to make sure all the unit tests run for each PR.

If you use something else for version control, check if it offers a similar CI feature. If not, set up Jenkins.
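For GitHub, a minimal workflow that runs the test suite on every PR can be a single file. This is only a sketch assuming a Maven-based Java project; swap the build step for Gradle, npm, or whatever you actually use:

```yaml
# .github/workflows/ci.yml -- minimal sketch; the Maven build step
# and Java version are assumptions, adjust to your own stack.
name: CI
on:
  pull_request:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
      # `verify` compiles and runs unit + integration tests
      - run: mvn --batch-mode verify
```

GitLab CI and Azure DevOps pipelines express the same idea with slightly different YAML.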

I'm an SRE at a big tech company, so part of my job is to make sure CI infrastructure is readily available to our Dev partners. But I've worked at smaller companies before (10 or less SWEs) and even they had a Jenkins instance.

This is a bright red flag to me. If I worked for a company that didn't have CI, the first thing I would do is set it up. If I wasn't allowed to take the time required to do that, I would quit...

[-] yournameplease@programming.dev 10 points 7 months ago

We do have CI (Azure DevOps), we aren't that insane. Though to be fair, it's relatively recent. The legacy app has a build pipeline but no tests. We got automated deployments to lower environments set up about a year back.

My main project has build pipelines as well, Spring Boot "microservices" (probably a red flag given our size and infrastructure) with code coverage around 40-60% mostly unit tests. But I'm the only dev that really writes tests these days. No deployment pipelines there though as the SysAdmin is against it (and only really let us do the legacy app reluctantly).

[-] cbarrick@lemmy.world 15 points 7 months ago* (last edited 7 months ago)

Ok. So if you have the infra already, it's really just a matter of actually writing the tests. That can be done incrementally.

40%-60% unit test coverage is honestly not too bad. But if the company's bottom line rests on this code, you probably want to get that up. 100% isn't really worth it for application code, though it is definitely worth it for library code.

One thing where I work is that all commits must be reviewed before being merged. A great way to improve coverage is to be that guy when people send you PRs.

[-] yournameplease@programming.dev 6 points 7 months ago

Ehh to be fair, none of the code with coverage is in use by anyone. It's a constantly delayed project that I kind of doubt will last more than a few months in production if it ever gets there. The primary app has no tests, and the structure probably would require dedicated effort to make testable. Most logic goes through this sort of "god object" that couples huge models very tightly with the database. It's probably something that can be worked around in a week or so, but I never spend much time on that project.

I'm not sure if I want to be that guy though, slowing everyone down when the scrum master and managers are already constantly complaining about everything going over estimates. (Even if poor testing is part of the problem...) I could maybe get a couple devs to buy in on requiring tests on new code, but definitely not QA or my EM. The last time I tried to grandstand over testing, I got "XYZ needs this ready now, I'll create a story for next sprint to write tests." That was 4+ sprints ago, and it's still sitting there. I just don't really know how to advocate for this without looking like an annoying asshole, after trying for so long.

[-] ericjmorey@programming.dev 5 points 7 months ago

scrum master and managers are already constantly complaining about everything going over estimates

This is a bigger problem than tests.

I just don’t really know how to advocate for this without looking like an annoying asshole, after trying for so long.

You're presenting a solution for a problem that the team either does not see as important or doesn't think exists at all.

You need to demonstrate the value the solution can bring to them on their terms.

[-] PumpkinEscobar@lemmy.world 20 points 7 months ago

Automated testing is often more cost effective than manual testing. Not to say 100% automated testing is a reasonable goal. But I’ve never worked anywhere without some automated testing (unit, integration or end-to-end).

[-] BehindTheBarrier@programming.dev 18 points 7 months ago* (last edited 7 months ago)

I'm on a similarly sized team, and we have put more effort into automated testing lately. We got an experienced person on the team who knows his shit and is engaged in improving our testing. It's definitely worth it. Manual testing tests the code now; automated testing checks the code later. That's very important, because when 5 people test things, they aren't going to test everything every time as well as all the new stuff. It's too boring.

So yes, you really REALLY should have automated testing. If you have 20 people, I'd guess you're developing something that is too large for a single person to have in-depth knowledge of all parts.

Any team should have automated tests. More specifically, you should have/write tests that test "business functionality," not that your function does exactly what it is supposed to do. Our test expert made a test for something that said "ThisCompentsDisplayValueShouldBeZeroWhenUndefined". (Here the component is something the users see and always expect to have a value. There are other components that might not show a value.)

Then, when I had to touch the data processing because another "component" did not show zero in an edge case, I fixed the edge case, but I also broke the test for that other component. Now it was very clear to me that I had also broken something that worked. A manual tester would maybe have noticed, but these were separate components, and they might still see 0 on the thing that broke because it happened to have the value 0. Or they simply might not have known that was a requirement!
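A behaviour-level test like the one described above can stay tiny. A minimal sketch, where `display_value` is a hypothetical stand-in for the component's actual logic:

```python
# Sketch of a "business functionality" test: it pins down what the
# user should see, not how the function is implemented internally.
# `display_value` is an invented stand-in, not code from this thread.

def display_value(value):
    """What the component shows: the value, or 0 when nothing is set."""
    return 0 if value is None else value

def test_display_value_is_zero_when_undefined():
    assert display_value(None) == 0

def test_display_value_passes_real_values_through():
    assert display_value(12.5) == 12.5

test_display_value_is_zero_when_undefined()
test_display_value_passes_real_values_through()
```

Anyone who later changes the data processing and breaks this rule gets told immediately, which is exactly the regression story above.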

We just recently started requiring unit tests to be green before merging features. It brings a lot more comfort, especially since you can put more trust in changing systems that deal with calculations when you know tests check that the results are unchanged.

[-] yournameplease@programming.dev 3 points 7 months ago

Was there any event that prompted more investment into testing? I feel like something catastrophic would need to happen before anyone would consider serious testing investment. In the past (before I joined), there were apparently people who tried to set up Selenium suites, but nothing ever stuck.

I think nobody sees value in improving something that has been more or less "good enough" for so long. In our legacy software, most major development is copy+paste and change things, which I guess reduces the chance of regressions (at the cost of making big changes much, much slower). I think we have close to 100 4k-line Java files copied from the same original, plus another 20-30 scripts and configs for each...

We are doing a "microservices rewrite" that interfaces with the legacy app (which feels like a death march project by now), and I think it inherited much of the testing difficulties of the old system, in part due to my inexperience when we started. Less code duplication, but now lots of enormous JSONs being thrown all over the network.

I agree that manual testing is not enough, but I can't seem to get much agreement. I think I do get value when I write unit tests, but I feel like I can't point to concrete value because there's not an obvious metric I'm gaining. I like that when I test code, I know that nobody will revert or break that area (unless they remove the tests, I suppose), but our coverage is low enough that I don't trust them to mean the system actually works.

[-] BehindTheBarrier@programming.dev 6 points 7 months ago* (last edited 7 months ago)

Our main motivator was, and is, that manual testing is very time-consuming and uninteresting for devs. Spending upwards of a week before a release, because the team has to set up, pick, and perform all feature tests again on the release candidate, is both time and money. And we saw things slip through now and then.

Our application is time critical, legacy code of about 30 years, spread between C# and database code, running in different variations with different requirements. So a single page may display differently depending on where it's running. Changing one thing can often affect others, so it is sometimes very tiresome to verify even the smallest changes, since they may affect different variants. Since there are no automated tests, especially for the GUI (which we also do not unit test much, because that is complicated and prone to breaking), we have to not only test changes but often check for regressions by comparing to the old version by hand.

We have a complicated system with a few integrations, so setting up all test scenarios not only takes time during testing, but also time for the dev to prepare the instructions. And as I mentioned with calculations: going through all the motions to verify that a calculated result is the same between two versions is an awfully boring experience, when that is exactly something automated tests can completely take over for you.

As our application is projected to grow, so does all the manual testing required for a single change. So the effort put into manual testing and preparation can instead often be put into making tests that check requirements. And once our coverage is good enough, we can manually test only the interfaces, and leave a lot of the complicated edge cases and calculation tests to automated tests. It's a bit idealistic to say automated tests can do everything, but they can certainly remove the most boring parts.

[-] HubertManne@kbin.social 16 points 7 months ago

Automate everything is the standard practice. You can't get a pull request in at my company without automated code review, including unit tests and Selenium-style practical tests, plus two human reviewers.

[-] mozz@mbin.grits.dev 14 points 7 months ago

I've never worked (recently) at a shop that didn't do some level of automated testing. In terms of having a bunch of people working on a big codebase without stuff being randomly broken most of the time, I'd say it's an absolute requirement to do it to at least some passable level.

In my experience it's, if anything, sometimes the opposite way -- like they insist on having testing even when the value of it the way it's being implemented is a little debatable. But yes I think it's important enough in terms of keeping things productive and detecting when something is totally-broken that you need to.

(Especially now when you can literally just paste a module into GPT and ask it to generate some sorta-stupid-but-maybe-good-enough test cases for it and with minimal tweaking you can get the whole thing in in like 10 minutes.)

[-] apotheotic@beehaw.org 14 points 7 months ago

My team follows test driven development, so I write a test before writing the feature that the test, well, tests.

This leads to cleaner code in general because it tends to be the case that easy to test code is also easy to read.

On top of this fact, the test suite acts as a sort of "contract" for the code behaviour. If I tweak the code and a test no longer works, then my code is doing something fundamentally different. This "contract" ensures that changes to one codebase aren't going to break downstream applications, and makes us very aware of when we are making breaking changes so we can inform downstream teams.

Writing tests and having them run at PR time (or before it's deployed to production, if you're not using some sort of VCS and CI/CD) should absolutely be a part of your dev cycle. It's better for everyone involved!
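The red-green loop above can be sketched in a few lines. The premium-rounding rule here is a made-up example, not something from this thread; in TDD you write the failing test first ("red"), then just enough code to make it pass ("green"):

```python
# TDD sketch: test first, implementation second.
# The rounding rule is invented purely for illustration.

def test_monthly_premium_is_rounded_to_cents():
    assert monthly_premium(annual=1000.10) == 83.34
    assert monthly_premium(annual=1200) == 100.0

def monthly_premium(annual):
    """Monthly share of an annual premium, rounded to cents."""
    return round(annual / 12, 2)

# Normally a test runner (pytest, JUnit, ...) would discover and run this.
test_monthly_premium_is_rounded_to_cents()
```

Once the test passes, it stays behind as the "contract": refactor `monthly_premium` however you like, and the test tells you if the observable behaviour changed.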

[-] hollyberries@programming.dev 5 points 7 months ago

Doesn't this rely purely on the fact that the test is right?

[-] OhNoMoreLemmy@lemmy.ml 7 points 7 months ago

Yeah, debugging tests is an important part of test driven development.

You also have to be careful. Some tests are for me to debug my code and aren't part of the 'contract'.

But on the other hand, it's really nice. If I spend a couple of hours debugging actual code and come out of the process with internal tests, the next time it breaks, the new tests make it much easier to identify what broke. Previously, that would have been almost wasted effort, you fix it and just hope it never breaks again.

[-] apotheotic@beehaw.org 5 points 7 months ago

Yeah, but it isn't usually very difficult to write a test correctly, unit tests especially.

If you can't write a test to validate the behaviour that you know your application needs to exhibit, then you probably can't write the code to create that behaviour in the first place. Or, in a less binary sense, if you would write a test which isn't "right", you're probably just as likely to have written code that isn't "right".

At least in the case with the test, you write the test and the code, and when the test fails (or, doesn't fail when it should) you're tipped off to something being funky.

I'm sure you could end up writing a test that's bad in just the right way to end up doing more harm than good, but I do think that's the exception(heh).

[-] hollyberries@programming.dev 3 points 7 months ago

I’m sure you could end up writing a test that’s bad in just the right way to end up doing more harm than good, but I do think that’s the exception(heh).

That's exactly why I've asked. That is where I've gone wrong with TDD in the past, especially where any sort of math is involved due to being absolutely horrible at it (and I do game dev these days!). I can problem solve and write the code, I just can't manually proof the math without external help and have spent countless hours looking for where my issue was due to being 100% certain that the formula or algorithm was correct >.<

Nowadays anytime numbers are involved I write the tests after doing manual tests multiple times and getting the expected response, and/or having an LLM check the work and make suggestions. That in itself introduces more issues sometimes since that can also be wrong. Probably should have paid attention in school all those years ago lol

[-] yournameplease@programming.dev 3 points 7 months ago

Game dev seems like a place where testing is a bit less common due to need for fast iterations and prototyping, not to say it isn't valuable.

I've seen a good talk (I think GDC?) on how the Talos Principle devs developed a tool to replay inputs for acceptance testing. I can't seem to find the talk, but here is a demo of the tool.

The Factorio devs also have some testing discussions in their blog somewhere.

[-] apotheotic@beehaw.org 3 points 7 months ago

Aw man, I can empathise. I don't personally have any issues with mathsy stuff, but I can imagine it being a huge brick wall at times, especially in game dev. I wish I had advice for that, but it's not a problem I've had to solve!

[-] yournameplease@programming.dev 3 points 7 months ago

We've definitely written lots of tests that felt like net negative, and I think that's part of what burned some devs out on testing. When I joined, the few tests we had were "read a huge JSON file, run it through everything, and assert seemingly random numbers match." Not random, but the logic was so complex that the only sane way to update the tests when code changed was to rerun and copy the new output. (I suppose this is pretty similar to approval testing, which I do find useful for code areas that shouldn't change frequently.)
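For reference, the approval-testing idea mentioned here can be sketched roughly like this (the `process` logic and file name are invented; real libraries such as ApprovalTests automate the diff-and-approve workflow):

```python
# Approval-test sketch: instead of hand-maintaining "seemingly random
# numbers", record the current output once, then diff against it.
import json
import pathlib

APPROVED = pathlib.Path("process.approved.json")

def process(record):
    # Hypothetical stand-in for "run the big JSON through everything".
    return {"total": record["a"] + record["b"],
            "flagged": record["a"] > record["b"]}

def check_against_approved(record):
    actual = process(record)
    if not APPROVED.exists():
        # First run: record the current output as the approved snapshot.
        APPROVED.write_text(json.dumps(actual, indent=2))
        return True
    # Later runs: a behaviour change shows up as a reviewable diff,
    # which you re-approve deliberately instead of copying blindly.
    return actual == json.loads(APPROVED.read_text())
```

The important difference from "rerun and copy the output" is that re-approving is an explicit, reviewed step rather than an automatic one.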

Similar issue with integration tests mocking huge network requests. Either you assert the request body matches an expected one, and need to update that whenever the signature changes (fairly common). Or you ignore the body, but that feels much less useful of a test.
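One middle ground for the brittle-body problem is to assert only on the fields a given test actually cares about and ignore the rest of the payload, so a signature change elsewhere doesn't break every test. A hedged sketch (all names invented):

```python
# Sketch: partial assertion on a captured request body. `captured`
# stands in for whatever your mock or recording transport captured.

def assert_body_subset(captured, expected_fields):
    """Check only the fields this test is about; ignore the rest of
    the (large, frequently changing) payload."""
    for key, value in expected_fields.items():
        assert captured.get(key) == value, \
            f"{key}: {captured.get(key)!r} != {value!r}"

# Example: a huge body, of which this test only cares about two fields.
captured_body = {"policyId": "P-123", "premium": 99.5,
                 "audit": {"ts": "2024-01-01", "user": "svc"}}
assert_body_subset(captured_body, {"policyId": "P-123", "premium": 99.5})
```

It sits between "assert the whole body" (brittle) and "ignore the body" (weak): each test pins down only its own contract.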

I agree unit tests are hard to mess up, which is why I mostly gravitate to them. And TDD is fun when I actually do it properly.

[-] apotheotic@beehaw.org 2 points 7 months ago

I hear you. When you're trying to write one big test that verifies the whole code flow or whatever, it can be HELL, especially if the code has been written in a way that makes it difficult to write a robust test.

God, big mocks are the WORST. It might not be applicable in your case, but I far prefer doing some setup and teardown so that I'm actually making the network request, against some test endpoint that I set up in the setup stage. That way you know the issues aren't cropping up due to some mocking nonsense going wrong.
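The setup/teardown idea can be sketched with a throwaway local HTTP server (pure stdlib; the handler is a trivial stand-in for a real test endpoint):

```python
# Sketch: make a real HTTP request against a local test endpoint,
# started in setup and stopped in teardown, instead of mocking the
# transport layer.
import http.server
import threading
import urllib.request

class StubHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

def run_request_against_test_server():
    # Setup: bind to an ephemeral port so tests can run in parallel.
    server = http.server.HTTPServer(("127.0.0.1", 0), StubHandler)
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    try:
        url = f"http://127.0.0.1:{server.server_port}/health"
        with urllib.request.urlopen(url) as resp:
            return resp.status, resp.read()
    finally:
        server.shutdown()  # teardown
```

Failures then come from real request/response handling, not from the mock disagreeing with reality.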

Asserting that some arbitrary numbers match can be quite fragile, as I'm sure you've experienced. But if the code itself had been written in such a way that you had an easier assertion to make, well, winner!

It's all easier said than done, of course, but your colleagues having given up on testing because they're bad at it is kinda disheartening, I bet. How are you gonna get good at it if you don't do it! :D

[-] Ephera@lemmy.ml 2 points 7 months ago* (last edited 7 months ago)

You should think of an automated test as a specification. If you've got the wrong requirements or simply make a mistake while formulating it, then yeah, it can't protect you from that.
But you'd likely make a similar or worse mistake, if you implemented the production code without a specification.

The advantage of automated tests compared to a specification document, is that you get continuous checks that your production code matches its specification. So, at least it can't be wrong in that sense.

[-] FizzyOrange@programming.dev 14 points 7 months ago

Very common. Your coworkers are either idiots, or more likely they're just being lazy, can't be bothered to set it up and are coming up with excuses.

The one exception I will allow is for GUI programs. It's extremely difficult to do automated tests for them, and in my experience it's such a pain that manual testing is often less annoying. For example, VSCode has no automated UI tests as far as I know.

That will probably change once AI-based GUI testing becomes common but it isn't yet.

For anything else, you should 100% have automated tests running in CI and if you don't you are doing it wrong.

[-] yournameplease@programming.dev 5 points 7 months ago

Leadership may be idiots, but devs are mostly just burnt out and recognized that quality isn't a very high priority and know not to take too much pride in the product. I think it's my own problem that I have a hard time separating my pride from my work.

Thanks for the response. It's good to know that my experience here isn't super common.

[-] Lodra@programming.dev 13 points 7 months ago

Here’s my random collection of thoughts on the subject.

I have no idea how common it is in general. Seems like some devs build tests while others don’t. This varies plenty on a team level as well as organization wide. I’ve observed this at small to very large companies, though not FAANG where I generally hope and expect that tests are a stronger standard.

I will say that test are consistently and heavily used in every large, open source project that I’ve reviewed. At some point, I think quality test cases become a requirement.

Here’s the big thing. Building automated tests is almost always a wise investment, regardless of the size of the org. Manual testing is dramatically more expensive and less effective than running unit and integration tests. I’ve never written unit tests and not found issues.

More importantly, writing unit tests forces you to write code that can be tested. This is important. IMO, code that can be tested is 1) structured differently and 2) almost always better.

Unit tests protect you from your own mistakes. Frequently. Integration tests protect you from other people. E.g when your code depends on an api and that api unexpectedly introduces a breaking change.

Everybody likes having quality tests. Nobody likes writing tests.

Quality tests are basically a strict requirement for fully automating ci/cd to production. Sure, you can skip tests and automate prior deploys, but I certainly don’t recommend it. I would expect people to be fired for doing this.

Chasing 100% test coverage is a fools game. Think about your code, what matters, and what doesn’t. Test the parts that add value and skip the rest. This is highly related to how writing unit tests change your code.

Building front end tests is inherently hard. It’s practically impossible to fully test front end code. Not even close.

Personally, I like the idea of skipping tests when you’re building a POC. Before the POC is done, you may not know if your solution is viable or what needs to be tested. The POC helps you understand. Builds tests for MVP and further iterations.

Quality ci/cd tests are complemented by quality observability, which is a large and independent topic.

/ ramblings of a tired mind

[-] yournameplease@programming.dev 3 points 7 months ago

This is more or less the thoughts I typically hear online, and all makes sense. What I tend to notice interviewing people from big(ger) companies than mine (mostly banks), it sounds like testing for them is mostly about hitting some minimum coverage number on the CI/CD. Probably still has big benefits but it doesn't seem super thoughtful? Or is testing just so important that even testing on autopilot has decent value?

I get that same feeling with frontend testing. Unit testing makes sense to me. Integration testing makes sense but I find it hard to do in the time I have. But frontend testing is very daunting. Now I will only test our data models we keep in the frontend, if I test anything frontend.

[-] Lodra@programming.dev 6 points 7 months ago* (last edited 7 months ago)

Test coverage is useful to measure simply because it’s a metric. You can set standards. You can codify the number into ci/cd. You can observe if the number goes up or down over time. You can argue if these things are valuable but quantifying test coverage just makes it simpler or possible to discuss testing. As people discuss test coverage and building tests becomes normalized, the topic becomes boring. You’ll only get thoughtful discussions on automated testing when somebody establishes a new method, pattern, etc. After that, most tests are very simple. That’s often the point.

Even “testing on autopilot” has high value.

You can build lots of useful front end tests. There are tools for it. But it’s just not possible to test everything because you can’t codify every requirement. E.g. ensure that this ui element is 5 pixels below some other element, except when the window shrinks, and …

I haven’t seen great front end tests. But the ones I’ve seen mostly focus on functionality and flow rather than aiming to cover all possible scenarios. Unit tests are different in this regard.

Integration testing makes sense but I find it hard to do in the time I have.

This is a red flag. Building tests should be a planned part of your work, usually described as acceptance criteria. If you need 4 hours to write a code change, then plan for 8 or whatever so you can build tests. Engineering leaders should encourage this. If they don’t, I would consider that a cultural problem. One that indicates a lack of focus on quality and all of the problems that follow.

Edit: I want to soften my "red flag" comment. That's a red flag for me. That job isn't necessarily bad. But I would personally not be interested. It's ok to accept things like, "we don't write tests and sometimes we deal with issues". Especially if it's a good job otherwise.

[-] Piatro@programming.dev 10 points 7 months ago

I'm in a team of 4 developers and we demand automated testing. Ok that's part of a slightly bigger development team but even our QC team have automated tests that they run for integration testing.

[-] Badeendje@lemmy.world 9 points 7 months ago

And please, for the love of all that is holy: DO NOT let some schmuck set up a test platform with all defaults and cause thousands of notifications per minute, without any plan for actually addressing the notifications (by either fixing the issue or tweaking thresholds for your specific situation).

Otherwise all the system does is train people to become apathetic to notifications and warnings. I was once on an IT team that had red notification boards with 104k notifications.. and all they did was joke about it.. seriously, turn off the system then.

[-] Skelectus@suppo.fi 8 points 7 months ago

I wouldn't accept a job if they didn't do it.

[-] henfredemars@infosec.pub 7 points 7 months ago

We use automated testing, not for full coverage, but smoke tests so we can detect problems more quickly and avoid potential embarrassment.

[-] pkill@programming.dev 7 points 7 months ago* (last edited 7 months ago)

Sometimes you'd use defensive programming (type checker, exception handling, null safeguards, fallback/optional values), which can be argued to be a sort of in-place testing, so tests can matter less to your project's robustness than the readability of its core business logic. Some languages lean more heavily towards defensive programming (e.g. Go, Scala, or well-written TypeScript), while others rely more on tests but are also designed in a way that makes testing really easy, since they keep things loosely coupled (Elixir or Clojure).
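As a rough illustration of "defensive programming as in-place testing" (the config shape here is invented): invalid input is absorbed at the boundary with a fallback, so a whole class of failures a test would otherwise have to catch simply can't occur:

```python
# Sketch: a null safeguard plus fallback value at the input boundary.
# The "timeout setting" is a made-up example.

def parse_timeout(raw, default=30):
    """Read a timeout setting, falling back instead of crashing."""
    if raw is None:          # null safeguard
        return default
    try:
        value = int(raw)     # type/format safeguard
    except (TypeError, ValueError):
        return default
    return value if value > 0 else default  # range safeguard
```

A test suite would still be needed for the business logic, but the boundary code now has one well-defined behaviour for every bad input.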

Also, if your language doesn't have a quality REPL to reliably test things manually, there is a relatively high chance your debugging process is wasting more time than good test coverage would.

[-] FizzyOrange@programming.dev 7 points 7 months ago

I think even in languages that do a lot at compile time (Rust, Haskell, etc.) it's still standard practice to write tests. Maybe not as many tests as e.g. Python or JavaScript or Ruby. But still some.

I work in silicon verification and even where things are fully formally verified we still have some tests. (Generally because the formal verification might have mistakes or omissions, and occasionally there are subtle differences between formal and simulation.)

[-] qevlarr@lemmy.world 6 points 7 months ago

I worked at 8 different companies as a contractor, so hopefully my sample size is big enough to be meaningful. I'd say it's 50-50. The companies that don't, usually know that they should but they need a little help. Companies that don't do it and they think they don't need it, are becoming more and more rare (fortunately).

Stick with it. If you're a junior, don't go evangelizing automated testing because it will fall on deaf ears until you're a little more experienced. Keep practicing and offer to set things up if they haven't already.

[-] Theharpyeagle@lemmy.world 6 points 7 months ago* (last edited 7 months ago)

We started focusing in on automated testing when we had 3 manual QAs (not including me), and since then every new project has started with plans for automated testing.

It's important to note that we don't do automated tests instead of manual testing. Manual testing is still important for focused review of new features/bugs, but automated tests make sure code changes aren't breaking anything elsewhere.

Also this is all about end-to-end tests (with Selenium, in our case). If you're talking about a lack of unit/integration tests within the codebase itself, that's a huge red flag. Even if quality issues aren't the end of the world, they will definitely make people reconsider using your product. Who wants to trust their financial information with unstable software? It's also making your QA team less efficient since they're having to chase down issues that would be better recognized by the dev who wrote them.

[-] GissaMittJobb@lemmy.ml 6 points 7 months ago

Automated tests are pretty common, yes. It's not strictly speaking a matter of company size, but rather of technical maturity.

Automated tests do not slow your business down, it is in fact the only way to not get slowed down as the amount of code you maintain increases.

The alternative cost of not having tests catch issues before they reach production is very significant - an error caught by an automated test costs nothing, while an error that makes it into production can cause immense harm to the business, if only for the time necessary to remediate the issue, which is time that could have been spent on actually making progress on delivering new features.

Not to mention the high cost of having to employ increasing amounts of manual testers just to keep the worst of issues from slipping through.

All in all, not having automated tests in place is a significant mistake from a business perspective. You might want to have a frank discussion with your CTO about it.

[-] Kissaki@programming.dev 6 points 7 months ago* (last edited 7 months ago)

My context: I'm in a small (~30 person) software company. We do various projects for various customers. We're close to the machine sector, although my team is not. I'm the lead of a small 3-person developer team / continuous project.

I write unit tests when I want to verify things: when I'm working in low-level, algorithmic, or interfacing areas.

I would write more and against our interfaces if those were exposed to someone or something. If it needs that stability and verification.

Our testing is mainly manual (mostly user-/UI-/use-interface-centric), and we have data restrictions and automated consistency validations of reporting data. (Our project is very data-centric.)

it’s not practical for our size and the business would allow us to slow down for that

Tests are an investment. A slowdown now to implement tests increases maintainability and stability down the line. The payoff can come even before delivery (issues noticed in review, before merge, or before release).

It may very well be that they wouldn't even slow you down, because they could lead you to a more thought out implementation and interfacing. Or noticing issues before they hit review, test, or production.

If you have a project that will be maintained then it's not a question of slowing down but of are you willing to pay more (effort, complexity, money, instability, consequential dissatisfaction) down the line for possibly earlier deliverables?

If tests would make sense and you don't implement them then it's technical debt you are incurring. It's not sound development or engineering practice. Which should require a conscious decision about that fact, and awareness on the cost of not adding tests.

How common automated testing is - I don't know. I think many developers will take shortcuts when they can. Many are not thorough in that way. And give in to short-sighted time pressure and fallacy.

[-] yournameplease@programming.dev 2 points 7 months ago

Perhaps it's just part of being somewhere where tech is seen as a cost center? Technical leadership loves to talk big about how we need to invest in our software and make it more scalable for future growth. But when push comes to shove, they simply say yes to nearly every business request, tell us to fix things later, and we end up making things less scalable and harder to test.

It feels terrible and burns me out, but we never seem to seriously suffer for poor quality, so I thought this could be all in my head. I guess I've just been gaslit by my EM into thinking this lack of testing is a common occurrence.

(A programming lemmy may not be a terribly representative sample, but I don't see anyone here anywhere close to as wild west as my place.)

[-] Ephera@lemmy.ml 2 points 7 months ago

> It feels terrible and burns me out, but we never seem to seriously suffer for poor quality, so I thought this could be all in my head.

The way you suffer for it, is in a loss of agility.

When I'm in a project with excellent unit test coverage, I often have no qualms with typing up a hot fix, running it through our automated tests and then rolling it out, in less than an hour.
Obviously, if it's a critical target system, you might want to get someone's review anyways, but you don't have to wait multiple days for your manual testers to get around to it.

Another way in which it reduces agility is in terms of moving people between projects.
If all the intended behavior is specified in automated tests, then the intern or someone, who just got added to the project, can go ham on your codebase without much worry that they'll break something.
And if someone needs to be pulled out from your project, then they don't leave a massive hole, where only they knew the intended behavior for certain parts of the code.

Your management wants this, they just don't yet understand why.

[-] yournameplease@programming.dev 1 points 7 months ago

We ~~have~~ used to have a scrum master so we're already agile! /s

They want those things, sure, but I think it would take multiple weeks of dedicated work for me to set up tests on our primary system that would cover much of anything. Big investment that might enable faster future development is what I find hard to sell. I am already seen as the "automated testing guy" on my (separate) project, and it doesn't really look like I'm that much faster than anyone else.

What I've been meaning to do is start underloading my own sprint items by a day or two and try to set up some test infrastructure in my spare Fridays to show some practical use. But boy is that a hard thing to actually hold myself to.

[-] Ephera@lemmy.ml 2 points 7 months ago

If we end up in a project with too little test coverage, our strategy is usually to then formulate unit tests before touching old code.

So, first you figure out what the hell that old code does, then you formulate a unit test until it's green, then you make a commit. And then you tweak your unit test to include the new requirements and make the production code match it (i.e. make the unit test green again).
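A minimal sketch of that workflow, where `PremiumCalculator` and its rounding rule are made up for illustration (plain asserts stand in for a real JUnit test, just to keep the example self-contained):

```java
// Made-up stand-in for some legacy code we need to touch.
class PremiumCalculator {
    static long surchargeCents(long baseCents) {
        // Current behavior: flat 10% surcharge, truncated to whole cents.
        return baseCents / 10;
    }
}

public class CharacterizationTest {
    public static void main(String[] args) {
        // Step 1: pin down what the code does *today*, even if it looks
        // wrong (99 cents -> 9, the fractional cent is dropped), and
        // commit once this passes.
        long got = PremiumCalculator.surchargeCents(99);
        if (got != 9) {
            throw new AssertionError("expected 9, got " + got);
        }
        System.out.println("characterization test green");

        // Step 2: when the new requirement lands (say, round half up),
        // change the expectation here first, watch it fail, then adjust
        // surchargeCents() until it's green again.
    }
}
```

The point of step 1 is that the first test asserts observed behavior, not desired behavior; only after it's green and committed do you let the requirement change the expectation.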

> I am already seen as the "automated testing guy" on my (separate) project, and it doesn't really look like I'm that much faster than anyone else.

This isn't about you being faster as you write a feature. I mean, it often does help, even during the first implementation, because you can iterate much quicker than by starting up the whole application. But especially for small applications, it will slow you down when you first write a feature.

Who's sped up by your automated tests are your team members and you-in-three-months.
You should definitely push for automated tests, but you need to make it clear that this needs to be a team effort for it to succeed. You're doing it as a service to everyone else.

If it's only you who's writing automated tests, then that doesn't diminish the value of your automated tests, but it will make it look like you're slower at actually completing a feature, and it will make everyone else look faster at tweaking the features you implemented. You want your management to understand that and be on board with it, so that they don't rate you badly.

[-] yournameplease@programming.dev 2 points 7 months ago

> Who’s sped up by your automated tests are your team members and you-in-three-months.

Definitely true. I am very thankful when I fail a test and know I broke something and need to clean up after myself. Also very nice as insurance against our more "chaotic" developer(s).

I've advocated for tests as a team effort. Problem is just that we don't really have any technical leadership, just a hands-off EM and hands-off CTO. Best I get from them is "Yes, you should test your code." ...Doesn't really help when some developers just aren't interested in testing. I am warming another developer on my team up to testing, so at least I may get another developer or two on the testing kick for a bit.

And as for management rating me... I don't really worry too much. As I mentioned, hands off management. Heck, we didn't even get performance reviews last year.

[-] lurch@sh.itjust.works 5 points 7 months ago

Yes, it's very common in my region. 50% of the companies I worked at had CI servers that ran unit tests round the clock, and those companies are only slightly bigger than yours. I also know multiple companies my company worked with that have CI setups.

Some even auto-deploy to prod when the tests on master pass.

Most use Hudson or Jenkins for CI, with JUnit, PHPUnit, Selenium, and/or Cypress for testing.
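For anyone curious what that looks like, a minimal declarative Jenkinsfile for a JUnit-based build might be something like this (the Maven invocation and the surefire report path are assumptions about the project layout):

```groovy
pipeline {
    agent any
    stages {
        stage('Build & Test') {
            steps {
                // Assumes a Maven project; swap in gradlew, phpunit,
                // or whatever runner the project actually uses.
                sh 'mvn -B verify'
            }
        }
    }
    post {
        always {
            // Publish JUnit XML results (requires the JUnit plugin)
            // so failures show up per-test in the Jenkins UI.
            junit 'target/surefire-reports/*.xml'
        }
    }
}
```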

[-] 0x0@programming.dev 4 points 7 months ago

I wish. At most companies I've worked at, I was maintaining monolithic legacy code that's hard to test properly. Sometimes another team was developing the next best thing under management guidance (so it would become the next monolithic legacy code), but usually not.

I've only worked at one company that did TDD and things were smooth.

As usual, management only sees the short term, and it's hard to impress on them that any time spent now implementing proper testing will be gained back in the long run.

[-] yournameplease@programming.dev 2 points 7 months ago

> another team was developing the next best thing under management guidance (so it would become the next monolithic legacy code)

Pretty much what my team is doing. No need to spend time improving the old system when this one will replace it so soon, right? (And no, we will not actually replace anything anytime soon.)

[-] MonkderDritte@feddit.de 3 points 7 months ago

Testing? What's that?

[-] lorty@lemmy.ml 3 points 7 months ago

Yes, it's pretty standard, although how valuable it is depends on a lot of factors. You can write a lot of useless tests just to hit the expected "coverage". Also, management will never see value in that type of work, even after things break in production.

this post was submitted on 04 May 2024
65 points (98.5% liked)

Ask Experienced Devs
