[-] thtroyer@programming.dev 4 points 7 months ago

Having used PHP and Java extensively in my career, it's always entertaining to read what people think about these languages.

[-] thtroyer@programming.dev 19 points 7 months ago

Based on some places I used to work, upper management seemed convinced that the "idea" stage was the hardest and most important part of any project, and that the easy part is planning, gathering requirements, building, testing, changing, and maintaining custom business applications for needlessly complex and ever changing requirements.

[-] thtroyer@programming.dev 3 points 7 months ago

Absolutely.

I've seen so many projects hindered by bad decisions around performance. Big things like shoehorning yourself into an architecture, language, or particular tool, but also small things like assuming the naive approach is unacceptably slow. If you never actually measure anything, your assumptions are just assumptions.
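
Checking an assumption like that can be as cheap as a few lines. A rough sketch in Java (the workload here is made up, just a stand-in for whatever "naive approach" you're worried about):

```java
import java.util.Random;
import java.util.stream.LongStream;

public class NaiveTiming {
    public static void main(String[] args) {
        // Hypothetical "naive" workload: one pass over 10k items.
        long[] data = new Random(42).longs(10_000).toArray();

        long start = System.nanoTime();
        long sum = LongStream.of(data).map(x -> x * x).sum();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("sum=" + sum + " in " + elapsedMs + " ms");
    }
}
```

For anything you'd put in a report you'd want a real harness like JMH (single `System.nanoTime()` passes are skewed by JIT warmup), but even a crude measurement beats guessing.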

[-] thtroyer@programming.dev 3 points 8 months ago

Best decision I made was taking an internship. I wasn't really looking for one, but through some connections, one basically fell in my lap. It was in old tech I messed with in high school, so I was reluctant, but getting real world programming experience was fantastic. The team was great and I helped solve some interesting problems on a small project of theirs. They kept me on as long as they could (>1 year). I think people can be way too idealistic, especially when starting out. Go get a year or two somewhere, anywhere. You'll have a ton more marketability and control over where you end up with experience and professional references.

Biggest career regret was waiting around afterwards, trying to get hired on at that same place. There weren't a ton of programming jobs locally and I wanted to continue my work there, but the company went through semi-frequent growth/shrink phases, and my team wasn't able to get me hired in, though they did try for a while. There were plenty of other good things happening in my life during the down-time after this job and before the next, so it's not really something I regret, but I definitely won't wait on a company like that again.

[-] thtroyer@programming.dev 47 points 10 months ago

Bill is a liability.

[-] thtroyer@programming.dev 4 points 10 months ago

Project Panama is aimed at improving the integration with native code. Not sure when it will be "done", but changes are coming.

[-] thtroyer@programming.dev 4 points 11 months ago

Nice video about it here: https://youtu.be/cZLed1krEEQ

Tldw: US DOS version actually has 2 separate impossible jumps on a level that aren't present on the European DOS or NES versions.

[-] thtroyer@programming.dev 5 points 11 months ago

Yep, absolutely.

In another project, I had some throwaway code, where I used a naive approach that was easy to understand/validate. I assumed I'd need to replace it once we'd verified it was right, because it would be too slow.

Turns out it wasn't a bottleneck at all. It was my first time using Java streams with relatively large volumes of data (~10k items) and it turned out they were damn fast in this case. I probably could have optimized it to be faster, but for their simplicity and speed, I ended up using them everywhere in that project.
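
As a rough illustration (the real code was domain-specific; the types and names here are invented), the kind of stream pipeline I mean, grouping and totaling ~10k items:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class StreamExample {
    // Hypothetical stand-in for the real domain objects.
    record Item(String category, int amount) {}

    public static void main(String[] args) {
        List<Item> items = IntStream.range(0, 10_000)
                .mapToObj(i -> new Item("cat" + (i % 5), i))
                .toList();

        // Group by category and sum the amounts in one readable pass.
        Map<String, Integer> totals = items.stream()
                .collect(Collectors.groupingBy(Item::category,
                         Collectors.summingInt(Item::amount)));

        System.out.println(totals.get("cat0")); // prints 9995000
    }
}
```

Easy to read, easy to validate against the spec, and at this scale the stream overhead just doesn't matter.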

[-] thtroyer@programming.dev 10 points 11 months ago* (last edited 11 months ago)

I've got so many more stories about bad optimizations. I guess I'll pick one of those.

There was an infamous (and critical) internal application somewhere I used to work. It took in a ton of data, put it in the database, and then ran a ton of updates to populate various fields and states. It was something like,

  • Put all data in x table with batch y.
  • Update rows in batch y with condition a, set as type a. (just using letters as placeholders for real states)
  • Update rows in batch y that haven't been updated and have condition b, set as type b.
  • Update rows in batch y that haven't been updated and have condition c, set as type c.
  • Update rows in batch y that have condition b and c and condition d, set as type d.
  • (Repeat many, many times)

It was an unreadable mess. Trying to debug it was awful. Business rules encoded as a chain of sql updates are incredibly hard to reason about. Like, how did this row end up with that data??

A coworker and I eventually inherited the mess. Once we deciphered exactly what the rules were and realized they weren't actually that complicated, we changed the architecture to:

  • Pull data row by row (instead of immediately into a database)
  • Hydrate the data into a model
  • Set up and work with the model based on the business rules we painstakingly reverse engineered (i.e. this row is type b because conditions x,y,z)
  • Insert models to database in batches
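
In code, the model-based version looked roughly like this (the types, conditions, and rule order here are invented stand-ins; the real rules were domain-specific). First matching rule wins, which mirrors the old "haven't been updated" guards:

```java
import java.util.List;

public class RowClassifier {
    enum Type { A, B, C, D, UNKNOWN }

    // Hydrated model for one row, instead of raw table state.
    record Row(boolean condA, boolean condB, boolean condC, boolean condD) {}

    // Business rules in one readable place, rather than spread
    // across a chain of SQL updates.
    static Type classify(Row r) {
        if (r.condA()) return Type.A;
        if (r.condB() && r.condC() && r.condD()) return Type.D;
        if (r.condB()) return Type.B;
        if (r.condC()) return Type.C;
        return Type.UNKNOWN;
    }

    public static void main(String[] args) {
        List<Row> batch = List.of(
                new Row(true,  false, false, false),
                new Row(false, true,  true,  true),
                new Row(false, false, true,  false));
        batch.forEach(r -> System.out.println(classify(r)));
        // ...then insert the classified models to the database in batches.
    }
}
```

The point isn't the specific rules; it's that "how did this row end up with that data?" becomes a question you can answer by reading one function.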

I don't remember the exact performance impact, but it wasn't markedly faster or slower than the previous "fast" SQL-based approach. We found and fixed numerous bugs, and when new issues came up, issues could be fixed in hours rather than days/weeks.

A few words of caution: Don't assume that building things with a certain tech or architecture will absolutely be "too slow". Always favor building things in a way that can be understood. Jumping to the wrong tool "because it's fast" is a terrible idea.

Edit: fixed formatting on Sync

[-] thtroyer@programming.dev 10 points 11 months ago

This is a very strange article to me.

Do some tasks run slower today than they did in the past? Sure. Are there some that run slower without a good reason? Sure.

But the whole article just kind of complains. It never acknowledges that many things are better than they used to be. It also just glosses over the complexities and tradeoffs people have to make in the real world.

Like this:

Windows 10 takes 30 minutes to update. What could it possibly be doing for that long? That much time is enough to fully format my SSD drive, download a fresh build and install it like 5 times in a row.

I don't know what exactly is involved in Windows updates, but it's likely 1) a lot of data unpacking, 2) a lot of file patching, and 3) done in a way that hopefully won't bork your system if something goes wrong.

Sure, reinstalling is probably faster, but it's also simpler. If your doctor told you, "The cancer is likely curable. Here's the best regimen to get you there over the next year", it would be insane to say, "A YEAR!? I COULD MAKE A WHOLE NEW HUMAN IN A YEAR!" But I feel like the article is doing exactly that, over and over.

[-] thtroyer@programming.dev 7 points 11 months ago

I'm reluctant to call much "bloat", because just because I don't use something doesn't mean it isn't useful to other people, or to future me.

I used to code in vim (plus all sorts of plugins), starting in college where IDEs weren't particularly encouraged or necessary for small projects. I continued to use this setup professionally because it worked well enough, and none of the IDEs I tried for the main language I was using were great.

However, I eventually found IDEs that worked for the language(s) I needed, and I don't have any interest in going back to a minimalistic (vim or otherwise) setup again. It's not that the IDE does things that can't be done with vim generally, but having a tool that understands the code and the environment, and provides useful tooling on top of that, is invaluable to me. I find being able to do things with some automation (like renaming or refactoring) is so much safer, faster, and more enjoyable than doing it by hand.

Features I look for/use most often:

  • Go to (both definition and usages)
  • Refactor tooling (renaming, inlining, extracting, etc.).
  • Good warnings, along with suggested solutions. Being able to apply the solution automatically is a plus.
  • Framework integrations
  • User-friendly debugger. Ability to pause, drill in, and interact with data is immensely helpful with the type of applications I work on.
  • Configurable breakpoints.
  • Build tool integrations. Doing it on the console is... fine... but being able to set up what I need easily in the IDE is preferable.

Features I don't use or care so much about? Is there much left?

  • My IDE can hook up to a database. I've tried it a few times, but it never seemed particularly useful for the apps I work on.
  • git integration. I have a separate set of tools I normally work with. The ones in my IDE are fine, but I usually use others.
  • Profiler. I use it on occasion, but only a few times a year probably.

I do code in non-IDE environments from time to time, but that's almost always because of a lack of tooling more than anything else. (Like PICO-8 development)

[-] thtroyer@programming.dev 2 points 1 year ago

Which they could have done a much better job with.

It was basically just hosted SVN if I remember right, and they never added git support when it became the de facto version control system.

