The problem with this approach is when they say "okay, we'll have an intern do the manual keying each week and use the prototype as the finished product. Thank you, goodbye."
Or the fact that it looks 90% done, so the idea that you did it in 2 weeks means you are just days away from completion, right?
Sometimes the reasons for not hacking stuff together can be quite political, both for internal projects and for consultant/client relationships.
"okay, we'll have an intern do the manual keying each week and use the prototype as the finished product. Thank you, goodbye." is a valid business choice though. Maybe the cost of the intern and the risk of being unable to effectively maintain or add features is worth it to the client. It's their choice if they would prefer it that way.
Not only that, but the intern can make decisions that an automated system isn't going to be able to do safely due to ambiguities, etc.
This reminds me of an automated licensing call I was putting together once. The company owner wanted me to insert some verbiage into the notes field, only the web api didn't support the notes field (but you could do it by logging in and manually updating).
I had started to plan on how I was going to do this via a headless browser. I start asking the owner questions to clarify things and he says "just email XXX with the information and she'll manually log into the site and copy/paste the notes you put in the email".
And I thought... you know what? That's really fucking smart, I'm way overthinking this. They're going to get 4 or 5 license activations/month MAYBE, and this approach is a whole hell of a lot simpler and more robust than pulling in a headless browser to simulate it.
I then had to stop and think about why I hadn't considered that approach before.
I guess the point is that I agree with you wholeheartedly. Not only in terms of simply cost, but in terms of complexity, stability, and ambiguity as well.
There is a bigger story about the project for "YayHappyFunTimesCorp" which included that difficult conversation - how to, instead of carrying on with designing new features quickly, slow down and make sure that everything that we hacked together is scalable. It's definitely not an easy one.
This is often good advice especially in a startup, but it's not quite absolute. It's worth putting a certain amount of work into maintainability. It's worth pre-empting some classes of issues. And the further you shift into being a mature business, the more serious engineering becomes appropriate.
Yes, it's a bit of a trade-off, as usual in software. Hacking can be the fastest option right now, but having worked on a 10,000-LOC codebase that consisted mainly of such hacks and had no real maintainability: your hack might cause a ton of lost time in the future.
Then again, the more experienced you get, the easier it will be to spot if a hack is the appropriate thing or not. Like if your software is already nicely established and loosely coupled you can apply hacks here and there without any negative effect on maintainability or functionality whatsoever.
Mainly a C++ dev here; I do think LLVM, for example, is very well done, and Boost as well. But both are immense codebases, and especially Boost is complicated to the point it's insane. It's a shame, but I don't have good examples of relatively small projects to check out. Maybe that's worth an 'Ask HN' thread: I've definitely seen threads like 'Ask HN: what are good open source projects to check out', but without the extra requirement that they be small(ish). The concept of decoupling usually lives not just in the details but in the higher-level layers of the software, so to really grasp it you'd have to spend some time on the whole project, not just read some files here and there. Which would obviously be easier with a small project.
I would point to both Clang and KDE as projects with very good modularity. I remember reading through the sources of Clang a few years back and being impressed with it.
I always think this is very much the approach MVP (Minimum Viable Product) takes. I've seen this taken to the extreme in a company where the IT department was applying MVP to its internal business users, creating absolute havoc. Eventually Business Analysts were brought in to 'force' a more beneficial approach to delivering good solutions, as well as get the business talking to IT in terms that each could understand.
There is a point within a company, where you have to switch from Minimum Viable Product to Minimum Valuable Product and try and live by the motto "Write your way out of a job".
Not sure where I am going with this, but I think the OP is right. We overthink things sometimes.
I encountered this in a personal project in the past week.
The ideal, normalized schema would be slow and awkward for manual insertions on a daily basis, which I felt would deter me and hence undermine the project.
The not-quite-normalized version would be much quicker and more intuitive for me to keep up-to-date, so I talked myself into it.
In the future I can always write a stored proc to process the less-normalized data into idealized tables, and use the original tables solely for importing the data.
Despite that reassurance it was remarkably difficult to accept that doing the 'wrong thing' programmatically was the 'right thing' in a project scope.
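For what it's worth, that pattern can be sketched in a few lines with Python's sqlite3; the table names and columns here are made up for illustration. A flat staging table is quick to key into by hand, and a function stands in for the stored proc that folds it into the idealized, normalized tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- flat table: quick and intuitive for manual daily inserts
    CREATE TABLE entries (day TEXT, category TEXT, amount REAL);

    -- idealized, normalized tables
    CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE facts (day TEXT, category_id INTEGER REFERENCES categories(id), amount REAL);
""")

def normalize():
    """Stands in for the stored proc: fold flat rows into the normalized tables."""
    conn.execute("INSERT OR IGNORE INTO categories (name) "
                 "SELECT DISTINCT category FROM entries")
    conn.execute("""
        INSERT INTO facts (day, category_id, amount)
        SELECT e.day, c.id, e.amount
        FROM entries e JOIN categories c ON c.name = e.category
    """)
    conn.execute("DELETE FROM entries")  # staging table is import-only

# day-to-day use: just dump rows into the flat table
conn.executemany("INSERT INTO entries VALUES (?, ?, ?)",
                 [("2024-01-01", "food", 12.5), ("2024-01-01", "rent", 800.0)])
normalize()
print(conn.execute("SELECT COUNT(*) FROM facts").fetchone()[0])  # 2
```

The point being that the "wrong" flat table never leaks into the rest of the system; it's only ever an import buffer.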
I took the exact opposite approach in a personal project. In my professional experience I have seen this exact same pattern: stored procedures used to hide poorly designed tables. Over time, more entropy accumulates in the system. Being able to change things slows to a crawl because you have duplicate data due to the initial poor schema design.
Many current databases, like PostgreSQL, let you create views, and the performance even on a smaller machine is quite good. So if you stick to a normalized schema and build views on top, it will pay dividends later on.
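A rough illustration using SQLite from Python (the `authors`/`books` schema is hypothetical; PostgreSQL views behave the same way for reads): the data lives normalized, and the view gives you the convenient denormalized shape for querying.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- normalized storage
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (id INTEGER PRIMARY KEY,
                        author_id INTEGER REFERENCES authors(id),
                        title TEXT);

    -- convenient denormalized view over the normalized tables
    CREATE VIEW books_readable AS
        SELECT b.title, a.name AS author
        FROM books b JOIN authors a ON a.id = b.author_id;
""")
conn.execute("INSERT INTO authors VALUES (1, 'Le Guin')")
conn.execute("INSERT INTO books VALUES (1, 1, 'The Dispossessed')")

row = conn.execute("SELECT title, author FROM books_readable").fetchone()
print(row)  # ('The Dispossessed', 'Le Guin')
```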
I think outside perspectives from other disciplines can help here.
I garden, and before that I tried and failed at bonsai for quite some time. With landscaping it's good to have a five year plan, because everything keeps changing when you're not looking, and if you tried to do everything you wanted all at once you'd hurt yourself and/or kill the plant(s).
I also played Go for a long while, and I found parallels with the philosophies in refactoring. One of the big things in Go is that there are always fifteen things you could be doing but some have contingencies, and among the others you can't win unless you prioritize these options better than the other guy. You know what moves you will make, and when the time is right you will make the move, and the next and the next almost without thinking, because there are patterns just like in software. But right now this other move has more upside so you are doing that instead.
Depending on your database (I'm assuming some manner of SQL) you could create a view that is denormalized and supports SELECT/INSERT/UPDATE/DELETE for human query UX.
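In PostgreSQL, sufficiently simple views are auto-updatable; in SQLite you'd need an INSTEAD OF trigger to route writes on the view back into the normalized tables. A small sketch of the SQLite variant (the `people`/`phones` schema is just an example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE phones (person_id INTEGER REFERENCES people(id), number TEXT);

    -- denormalized, human-friendly shape
    CREATE VIEW contacts AS
        SELECT p.name, ph.number
        FROM people p JOIN phones ph ON ph.person_id = p.id;

    -- SQLite views aren't writable by default; an INSTEAD OF trigger
    -- routes a human-friendly INSERT into the normalized tables.
    CREATE TRIGGER contacts_insert INSTEAD OF INSERT ON contacts
    BEGIN
        INSERT OR IGNORE INTO people (name) VALUES (NEW.name);
        INSERT INTO phones (person_id, number)
            SELECT id, NEW.number FROM people WHERE name = NEW.name;
    END;
""")
conn.execute("INSERT INTO contacts (name, number) VALUES ('Ada', '555-0100')")
print(conn.execute("SELECT name, number FROM contacts").fetchone())
```

So the human gets one flat table to poke at, while storage stays normalized underneath.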
I think this needs to be said to a lot of stubborn egotistical designers too.
Ego in some ways is good but needs to be controlled and put aside so projects get the most effective outcome.
I've seen designers simply copy and paste from an "inspirational" pinterest board, strategists copy and paste the first idea from an article or speech or book - hacking stuff together is for more than simply technical people.
Being able to redefine the problem to get at a "quick and dirty" solution is absolutely a useful skill. What I think gets glossed over in the fine article and in some of the other comments, is that this is really only a good approach when one can also see the full solution as well and understands how to evolve past the "mechanical finger" if it becomes necessary to refine the system.
Where people get into trouble is when they have a short problem space horizon and don't have a decent feel for the trade-offs being made and a reasonable understanding of how the "wrong solution" is related to the "right solution". That can lead to the creation of a hack-y, fragile and unchangeable system that can really limit progress in the future.
If you know you're cutting corners, the ability to change things in the future should be the priority. That usually means you need to consider what the "perfect" solution would look like, so you know where the possible refinements will come into play.
Perhaps I am misunderstanding the discussion and do not see the "wrong" way: it seems that this consistent, incremental improvement is the "right way" to get to the "right result". Isn't that very much a tenet of whatever one may want to call a "lean start-up"?
"""
We started with a completely unscalable solution, which enabled us to validate the need. We then evolved it, step by step, to make it support more and more users. On the way, we learned not only about our users, but also discovered what the technology requirements were. We can only speculate what the outcome of the project would have been if we hadn’t let ourselves find a “quick and dirty” solution.
I can't. My first job out of college was extremely toxic. It was the other developers' job to completely trash anyone else's code. It was sport to them. Now, even several companies later, I have anxiety going into a code review.
This idea is basically all about the future. If you don't have to worry about the future whatsoever, or have no business worrying about it (e.g. you're in a startup that might die in 6 months), by all means hack it together, get as far as you can, and "borrow" as much technical debt as you can rack up.
The longer the future of the project, the more likely that tech debt will have to be repaid. And the more closely you're involved with it, the more likely it's you who'll be paying it.
It may not be absolute advice, but certainly applicable in many companies.
For instance, getting a new environment set up inside a company that is a big Azure customer. There are forms, reviews, cost centers, and more reviews, even to get a QA environment. I have access to a subscription, and more than once I've just set things up for people.
We're talking potentially weeks of time lost just to start working.
Not to mention the seemingly severe lack of willingness to prototype within an organization these days.
The coolest thing about ignoring trends is that most of them go away again.
I pity the fool who did test-first development on applications with fewer than 10 functionalities, for instance.
Even as a manager it's the gaffatape programmers who end up saving the day rather than the best practices.
Obviously I won't advocate against following best practices. It's just that people never seem to agree on what they are, making continuity rather hard to pull off over longer periods of time.
Having worked with delivering software on very short release cycles, my experience is that gaffa tape solutions often are the reason you need to "save the day" in the first place, where a well thought out solution could have saved us the headache.
The gaffa tape approach encourages further such poorly thought out modifications when the stacks of gaffa become a death trap that you either have to Indiana Jones your way through with a gun, bags of sand and a bullwhip or tear down and rewrite.
It's really hard to get gaffertape version 3.0 out the door. Those best practices are ultimately about reducing wear and tear on the development team, so you can keep momentum for a long long time.
But since you can make pretty much any development strategy work for about 18 months, by the time the consequences are felt either those people are gone or nobody still there can paint a clear cause and effect story.
Which is the text book explanation, but you can gaffatape and still do SOLID, making everything easily replaceable later on.
The key feature of gaffatape programming is that it gets shit done, to make sure you're still around to do a 3.0 release.
The key downside of gaffatape is that it requires better talent, because your programmers need to know how to hack it in a way that won't ruin your codebase.
I allocate more resources to fixing the work of people who followed best practices wrongly than I do to fixing what my gaffatape programmers made.
The thing about gaffatape programmers is that they learn how to do things that won't end up biting us in the ass later because they are thinking about what, why, and, how they are doing things by default, rather than following orders. Well some of them learn, the rest have short careers.
I mean, as I said, I don't advocate against best practices. If you want to gaffatape, you need to know best practices, because you need to understand why you are not following them.