
The cost of good Programmers/Developers.

Sergei

Senior Member
Forum Donor
Joined
Nov 20, 2018
Messages
361
Likes
272
Location
Palo Alto, CA, USA
To anyone who has worked in software development: have you ever been involved in a project that took twenty people two years to burn through several million dollars/pounds and ultimately failed, but which you are confident you could have done yourself, on your own, in a couple of months (if not a weekend)? I have been in situations like that, a few times.

No, I haven't. It was illuminating, however, to work for a global company that had 98% success rate on software projects. And yes, it did use a proprietary and rather detailed methodology. But also: it hired, trained, motivated, and promoted people according to meaningful objective metrics, which oftentimes contradicted the "everybody knows it" conventional wisdom in major ways.

Later I worked for another global company which, at the time I joined, had significantly overrun the budget of its major software project, despite using a supposedly surefire methodology. However, instead of assigning blame, the company decided to find and hire suitably experienced people, who in short order fixed the runaway-project issue.

I did estimation for the next big project that came right after the one I mentioned, and my estimate of required budget ended up being within one-digit % of actual. I was lucky to have as a manager an incredibly smart lady, with a Ph.D. in Computer Science to boot. She understood the non-linear and probabilistic aspects of dealing with software complexity, which I had to take into account to get to an accurate enough estimate.
I remember the development of a trivial product for which only about 300 units would be sold per year for a few hundred pounds each, but it cost millions to develop and ultimately failed, all down to the huge overheads of developing it using 'a methodology'.

Well, what you described failed according to the set of objective metrics that you find applicable. For some other people, the metrics could be different. For instance, someone maybe was promoted, to lead the supposedly necessary larger team. Someone maybe got crucial positive exposure to higher management during the project initiation phase, because of its supposedly unavoidable large scope.

If a manager is skilled at "taking the credit, shifting the blame", and a particular corporate environment encourages such behavior, you will see people being promoted, or successfully advancing their careers by switching companies at opportune moments, while the projects they initiated, or at some point were responsible for, have grossly failed according to objective metrics.

Sometimes it can get even more perverse: for instance, a higher-level manager/executive deliberately sets a direct or indirect report up for failure. The personal benefit for such a manager/executive could be either having a potential rival removed, or ensuring that only 100% loyal managers populate the part of the organizational command-and-control pyramid under him or her.

Most people are kind and good. Yet some ... more like https://www.amazon.com/Snakes-Suits-Revised-Understanding-Psychopaths-ebook/dp/B071YRWJBP.
 

Soniclife

Major Contributor
Forum Donor
Joined
Apr 13, 2017
Messages
4,510
Likes
5,438
Location
UK
The biggest thing is that if you use jargon or unusual abbreviations, it will suggest something else.
After a while you can usually tell, before you start typing certain words, that a word isn't in its dictionary, and then switch to typing the word out. The biggest problem with phone typing is editing: if you realise you want to rearrange what you wrote, phones are still clunky at that.
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
...my estimate of required budget ended up being within one-digit % of actual...
...what you described failed according to the set of objective metrics that you find applicable.
I maybe work in a different world from this.

A (mainly made-up) example of what I think goes wrong in many an engineering development:

Supposing you're developing a product that has an indicator LED on the front panel. At the design stage this is dealt with by the LED being driven directly by a microcontroller pin, and the assumption is that some code will turn the LED on at times and off at others. It's got to be one of the most trivial aspects of the product.

Already a potential problem has occurred: engineers' sense of aesthetics has been built into the product. And who drove the decision to turn the LED on and off with a microcontroller pin: was it the software engineer, hardware engineer or a higher level 'systems' engineer? Is it definitely the case that this CPU pin is capable of driving the LED, and is it the case that the software can meet the requirements of doing it at the right time? Is it definitely the case that the LED won't do something spurious at power-on, and so on? By splitting this problem up between engineers, maybe no one has a view of the whole problem.

That month's favourite software methodology designed to predict budgets to within 1% commits the method into the project, anyway. A test plan is drawn up.

Development proceeds using commercial evaluation boards and some LEDs lashed up by some technicians. In parallel, the production hardware is manufactured; with assurances from the engineers that no great changes are envisaged from now on.

When the software and production hardware come together, they are found to meet the requirements and everything's fine. Time is tight. The managers give the go-ahead to manufacture the production hardware ASAP.

But... the assessment of everything being fine is based on the simple yes/no criteria of the test plan: does the LED come on when condition X occurs? Unequivocally, yes.

However, when the prototype is presented to the customer, although they acknowledge that it works they are troubled by a few things. The LED indicator flickers as condition X comes and goes, and it's a lot brighter than the power indicator LED. It looks 'clunky'. Although technically the design works as per the original specification, no one is pleased with it.

I have seen very similar things to this. And I have almost never seen a functional specification that stipulates "The indicator LED shall not be perceived as flickering when condition X approaches or recedes" because it's almost impossible to quantify. And no one in the company even understands, or is capable of testing, a specification like "The perceived brightness match between LEDs shall be within 5% on the following scale...". Apple might do that sort of thing, but not this company.

Everyone assumes that 'it will be OK', or that someone else is taking care of that aspect. And no one has ever built a device that indicates condition X, so how could they know it would flicker?

In an emergency brainstorming session, the following solutions to the problems might be suggested:
  1. introduce hysteresis on condition X to prevent flickering
  2. introduce minimum time for on and off state to prevent flickering
  3. change hardware resistor value to prevent brightness mismatch
  4. introduce PWM to prevent brightness mismatch
  5. use PWM to give some proportional brightness to the LED instead of binary on/off
  6. use PWM combined with fixed-rate ramping on and off
  7. any and all permutations of the above
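Options 1 and 2 above are easy to describe and fiddly to get right. As a minimal, hypothetical sketch (a behavioural simulation in Python, not firmware; the thresholds and timings are invented purely for illustration), hysteresis combined with a minimum dwell time might look like:

```python
# Hypothetical sketch: de-flickering an indicator LED driven by a noisy
# "condition X" signal, combining options 1 and 2 from the list above.
# on/off thresholds and min_dwell are made-up illustration values.

def make_led_controller(on_threshold=0.6, off_threshold=0.4, min_dwell=5):
    """Return a step(sample) -> bool function implementing hysteresis
    plus a minimum number of samples the LED must hold each state."""
    # held starts at min_dwell so the very first transition is allowed.
    state = {"on": False, "held": min_dwell}

    def step(sample):
        desired = state["on"]
        if sample >= on_threshold:
            desired = True
        elif sample <= off_threshold:
            desired = False
        # Only allow a change once the current state has dwelt long enough.
        if desired != state["on"] and state["held"] >= min_dwell:
            state["on"] = desired
            state["held"] = 0
        else:
            state["held"] += 1
        return state["on"]

    return step

# A signal hovering around a single 0.5 threshold would flicker on and off;
# with hysteresis the LED only toggles on a decisive crossing, and the
# dwell time then holds it on through brief dips.
step = make_led_controller()
noisy = [0.5, 0.55, 0.45, 0.58, 0.42, 0.7, 0.65, 0.72, 0.3, 0.35]
states = [step(s) for s in noisy]  # turns on once, stays on
```

Note that even this toy version embeds three tuning decisions (two thresholds and a dwell time) that someone has to own and validate against real behaviour of condition X.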
The problem remains: who understands how to do it all? Possibly no one. And if the methodology assumes that engineers are all 'line-replaceable drop-in resource units', maybe no one is even capable of joining the necessary dots to come up with the above list.

And even if the above list is drawn up, there may be major problems:
  • PWM is very difficult because the microcontroller pin used for the LED hasn't got a hardware PWM module available
  • the software architecture really wasn't designed to maintain ramps etc. even if the PWM could be done in hardware
Somehow, someone needs to experiment with the list of options and come up with a satisfactory solution, and each option has its own pitfalls: hysteresis may make the LED seem 'sticky' or unresponsive; PWM below a certain frequency will flicker; minimum on/off time may make it seem unresponsive; proportional brightness may still look unsophisticated; ramping may make the LED seem unresponsive and look lumpy unless done with a logarithmic scale (does anyone in the company know this?), which in turn increases the required number of brightness graduations; and so on. Then the solution needs to be implemented by line-replaceable resource units, so it needs to be formulated in a way that fits the methodology. And not least, a test plan needs to be drawn up for it, which is quite a difficult task.
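The 'logarithmic scale' point can be illustrated with a short, hypothetical sketch (the gamma value of 2.2 is a common rule of thumb for perceptual brightness, not anything taken from the project above): a linear PWM ramp spends most of its steps in the visually 'already bright' region, while a gamma-corrected ramp spreads the perceived change more evenly:

```python
# Hypothetical sketch of option 6: ramping the LED brightness with a
# perceptual (roughly logarithmic) mapping so the fade looks smooth.
# gamma=2.2 is a conventional illustrative choice, not a specification.

def ramp_duty_cycles(steps=8, gamma=2.2, max_duty=255):
    """PWM duty values for a perceptually even fade-in over `steps` steps."""
    return [round(((i / (steps - 1)) ** gamma) * max_duty) for i in range(steps)]

linear = [round(i / 7 * 255) for i in range(8)]   # naive linear ramp
perceptual = ramp_duty_cycles()                    # gamma-corrected ramp
# The perceptual ramp starts with much smaller duty increments, which is
# exactly why it needs finer brightness graduations at the low end.
```

This also hints at the test-plan difficulty the post mentions: "the ramp shall look smooth" is not something a yes/no test step can verify.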

Maybe real world examples of condition X are impossible to find in an industrial unit in Slough, so simulations are used (that someone has to create, which takes time and effort). The potential configuration decided upon may still not look good in the field.

Etc. Etc.

I think this is a trivial example of the same factors that might lead to the Boeing problem or the Challenger disaster.

I think that the idea of engineers all being replaceable, interchangeable 'resource units' is a fantasy, and that for small projects, rigid methodologies and testing regimes are the stuff of nightmares that increase costs, result in 'clunkiness' and allow potential disasters to drop through the cracks while all the boxes are ticked correctly. The real world isn't compatible with being forced to conform to such rigid, simplistic schemes.

But at the same time, such schemes may be the only practical way to develop massive projects - with massive costs.

(Who would want to work in such an industry? Not me. It has always mystified me how talented people can stand being a tiny cog with a prescribed function within a massive organisation...).
 

StevenEleven

Addicted to Fun and Learning
Forum Donor
Joined
Dec 1, 2018
Messages
583
Likes
1,192
does anyone in the company know this?

(Who would want to work in such an industry? Not me. It has always mystified me how talented people can stand being a tiny cog with a prescribed function within a massive organisation...).

You guys are way more knowledgeable on this stuff than I am, but I can really relate to some of these experiences having seen them from the end-user perspective in a production environment, or by analogy to what I have been through.

We have had software developed by contractors that simply was not adequate and was the critical software in a production environment. It’s fair to say failure is not an option for us. It became obvious to me at least that we had access to absolutely no one anywhere among any of our resources inside or out who had the ability to fix the problems. So at one point when we were at our wits’ end we had Adobe and Microsoft cooperatively willing to help us on an ad hoc basis. They looked at the code, ran error logs, set up the software as configured to test it, and found so many problems, and fixed it so fast (in one or two weeks), and I was like, why in the world did we ever hire these other guys?

As far as being a cog with a prescribed function, I have been there too. I took a life lesson from something I read once. It was so long ago when I read it, my memory as to the specifics is hazy—but IIRC many extremely talented Ph.D. physicists were charting the cosmos in a huge collective effort for a top U.K. university and for each of them it was hugely taxing and extremely monotonous.

One of them was a woman physicist and she found a mentor and he gave her the big picture and told her she had to get out of this charting the cosmos role because it was a total dead end. So she put her mind to that and I even forget what her accomplishments were but she went on to do great things. (The book was more generally about history and development of and concepts in physics.)

So I said to myself, by analogy, I do not want to be one of many cogs charting the stars, I owe it to myself to find better for myself than that.
 

digicidal

Major Contributor
Joined
Jul 6, 2019
Messages
1,985
Likes
4,844
Location
Sin City, NV
To anyone who has worked in software development: have you ever been involved in a project that took twenty people two years to burn through several million dollars/pounds and ultimately failed, but which you are confident you could have done yourself, on your own, in a couple of months (if not a weekend)? I have been in situations like that, a few times.

Software people love the idea of organising a project a bit like software itself, and it's overkill. They also like to try 'something new' that will look good on their CV, perhaps.

I have literally experienced this at least 5 times in my career (thus far... more to come, I'm sure)!

The other problem is that often large projects (like a new shiny airplane, for instance) become the 'pets' of executives that have little to no experience in anything other than marketing. These executives then adjust - intentionally or accidentally - critical specifications on a whim, often over drinks after hours. This often occurs without any formal adjustment to the published specifications that were passed down the line to the guys 'in the trenches'. In the worst cases, only half of the production team is notified of the change... and the other half is left working from an obsolete spec.

I began my life as a network administrator, then had to take over as a dba and telecom tech when outsourced providers failed us. Finally, I had to pick up programming when the two sets of consulting firms (all paid much more than I was at the time) produced software that failed to function as specified. Two companies later, I've now replaced another set of 4 employees... simply because it was quicker to just do all of their work than it was to manage them.

Luckily I eventually learned not to do this without negotiating an increase equal to most of their salaries first. ;)
 

Soniclife

Major Contributor
Forum Donor
Joined
Apr 13, 2017
Messages
4,510
Likes
5,438
Location
UK
Luckily I eventually learned not to do this without negotiating an increase equal to most of their salaries first.
Well done, sounds like you have worked in similarly well-run companies to me. The last time they outsourced us I wangled a 50% rise, largely by not doing anything and looking unconcerned with their machinations.
 

Sergei

Senior Member
Forum Donor
Joined
Nov 20, 2018
Messages
361
Likes
272
Location
Palo Alto, CA, USA
And who drove the decision to turn the LED on and off with a microcontroller pin: was it the software engineer, hardware engineer or a higher level 'systems' engineer?

Optimally, on smaller projects it ought to be a systems architect experienced in hardware, software, and user-interaction design. On larger projects, it had better be a tightly knit team of people whose combined experience covers most of the bases, and who are mature and secure enough to ask others for help when the core team is out of its depth.
That month's favourite software methodology designed to predict budgets to within 1% commits the method into the project, anyway. A test plan is drawn up.

If a methodology is not battle-tested by this organization, no way it can predict the budget within 1%. In my experience, 5% is a more realistic goal, with a rock-solid methodology, stable core team, and a familiar technology stack.

I had my own "interesting conversations" with managers who would require a "to the cent" estimate. I would ask them how long it usually takes them to get from home to the office. They would invariably give me a range estimate.

Then I would say: OK, you've been driving this road for several years, every weekday, and still you can only predict the time it'll take you with 15% precision. Why is that? What factors affect this? Can we now discuss the similarly operating factors that add variability to our project delivery time?
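The commute analogy maps directly onto estimation: replace single-point task estimates with (best, likely, worst) ranges and sample the total, so the budget comes with a stated confidence instead of false precision. A minimal, hypothetical Python sketch (the task figures are invented for illustration):

```python
# Hypothetical sketch: why summing "most likely" task figures misleads.
# Per-task (best, likely, worst) ranges are right-skewed, so the naive
# sum of modes understates what a realistic budget should cover.
import random

def simulate_budget(tasks, trials=20000, seed=42):
    """Monte Carlo over a triangular distribution per task.
    Returns (naive sum of 'likely' values, 90th-percentile total)."""
    rng = random.Random(seed)  # fixed seed: reproducible illustration
    naive = sum(likely for _, likely, _ in tasks)
    totals = sorted(
        sum(rng.triangular(lo, hi, likely) for lo, likely, hi in tasks)
        for _ in range(trials)
    )
    p90 = totals[int(trials * 0.9)]
    return naive, p90

# (best, likely, worst) in person-days -- invented example tasks
tasks = [(10, 15, 40), (5, 8, 20), (20, 25, 60)]
naive, p90 = simulate_budget(tasks)
# p90 lands well above the naive sum, because the worst cases dominate
# the tail -- the same reason the commute "range estimate" is honest.
```

The interesting conversation with a manager is then about which percentile to commit to, rather than about a fictitious single number.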
However, when the prototype is presented to the customer, although they acknowledge that it works they are troubled by a few things.

Which should be fine, if the organization understands that maybe only prototype v6 will be good enough to base a mass market product on. Rapid prototyping tools and methodologies are of great help here. Also, properly randomized, representative, and impartial usability testing.
The problem remains: who understands how to do it all? Possibly no one. And if the methodology assumes that engineers are all 'line-replaceable drop-in resource units', maybe no one is even capable of joining the necessary dots to come up with the above list.

Ah, we're coming to the crux. In his follow-up book, https://www.amazon.com/Design-Essays-Computer-Scientist-ebook/dp/B003DKG5H6, Fred Brooks discusses the role of the architect, and the perils of his or her too-literal subordination to the management hierarchy.

Who were Albert Einstein's managers over the years? Can you find out, even if you really try? Humanity doesn't seem to care much about these people, no matter how good at managing they actually were.

Likewise, history remembers Ferdinand Porsche and Alec Issigonis, but not necessarily the people who formally managed them at one point or another. Designers / Architects / Artists / Musicians matter all by themselves!
I think this is a trivial example of the same factors that might lead to the Boeing problem or the Challenger disaster.

Too early to tell about Boeing. The smoke screens of personal-accountability avoidance are too thick at the moment. It is not unreasonable to assume that there were engineers and managers sounding the alarm, and that they were overruled by the executives. That is speculation at this point, of course.

With Challenger, it is clearer: the managers and executives who set up the smoke screens back then finally retired or otherwise departed, and a more truthful picture came out, thankfully in time for some of the wrongfully accused engineers to find peace before they died: https://www.npr.org/sections/thetwo...-engineer-who-warned-of-shuttle-disaster-dies.
I think that the idea of engineers all being replaceable, interchangeable 'resource units' is a fantasy, and that for small projects, rigid methodologies and testing regimes are the stuff of nightmares that increase costs, result in 'clunkiness' and allow potential disasters to drop through the cracks while all the boxes are ticked correctly. The real world isn't compatible with being forced to conform to such rigid, simplistic schemes.

Agreed.
But at the same time, such schemes may be the only practical way to develop massive projects - with massive costs.

The larger the corporation and the more cash-flow-positive it is, the more plausible the 'resource units' approximation appears to be. To the managers, and especially the executives, it appears that almost everyone in the world dreams of working for them, and that they have all the money and stock options they need to entice those who may not inherently want to, but who will, undoubtedly in their minds, be bought at the right price at the right time, if the need arises.

The problem is that too many people who want to work for such corporations aren't interested in, or aren't capable of, creating new things, but are very interested in being paid as much as achievable, for the least amount of work possible, while having their ego constantly stroked by their submissive underlings. In time, such people invade the corporation, or the government agency, and hire more people like them.

Over time, the previously flexible and mostly informal processes that ensured the corporation's past success start struggling to overcome the inertia and "distortions" caused by the people only interested in acquiring money and power. The corporation's "immune system", formerly tuned to identify and push out such freeloaders, is taken over by the freeloaders themselves, and creative people start being pushed out instead.

And then it appears that, in the context of this corporation, the only practical way to develop massive projects is with massive costs, via rigid command-and-control processes. Then a new corporation, or several, appear. Young, hungry, and agile, they overturn the established conventions, and a new corporate cycle begins. Think about it: would Mercedes, or NASA, have been able to hire Elon Musk ten years ago, for any money?

To be fair, some very old corporations don't suffer the fate I described. Invariably, they are organized as a looser agglomeration of semi-independent business entities, making their own decisions and bearing responsibility, up to and including dissolution if they fail to produce profits for too long. An oft-cited example is DuPont, which has existed in various configurations since 1802.
 

digicidal

Major Contributor
Joined
Jul 6, 2019
Messages
1,985
Likes
4,844
Location
Sin City, NV
If a methodology is not battle-tested by this organization, no way it can predict the budget within 1%. In my experience, 5% is a more realistic goal, with a rock-solid methodology, stable core team, and a familiar technology stack.

I had my own "interesting conversations" with managers who would require "to the cent" estimate. I would ask them how long does it usually take them to get from home to the office. They would invariably give me a range estimate.

I had a meeting with a CEO once that indicated they required a system with "flexible business logic" to manage all responsibilities of a specific category of employee - and they needed it within 6 weeks. Although mildly irritated at his tone and expectation, I was desperate for revenue so I simply responded that I didn't have enough familiarity with that particular role; but if he could write down what all of those responsibilities were, I would see what I could do.

His frustrated reply: "That would take at least 6 months!!!" At first I said nothing, waiting for him to notice the problem. However, after a very long three minutes... I just gathered my things and left. Ramen tasted pretty good in comparison to getting that contract.
 
Joined
Jul 26, 2019
Messages
7
Likes
4
Location
Canada, Vancouver
Anyone know how to access the nabu developer forums? I tried once or twice and just ended up in a loop of trying to access a page to request access.
 

AsiaSin

New Member
Joined
Sep 22, 2020
Messages
1
Likes
0
This is a terrible devaluation of labor... I've read many articles that say the cheapest developers are $25 per hour and the most expensive $180, for example in this article (https://www.cleveroad.com/blog/outsource-web-development) but $9 per hour is so little... Perhaps these developers had additional conditions that are not mentioned...
 

Neddy

Addicted to Fun and Learning
Joined
Mar 22, 2019
Messages
756
Likes
1,031
Location
Wisconsin
During my PM career I did a fair amount of 'outsource vs. hire local' comparisons/evaluations. The above linked site is OK, but misses the really hard stuff entirely: outsourced devs MUST be managed very tightly, even micro-managed if need be. (And well-written specs are only a tiny part of that, too.)

IME, many of the lower-cost devs are beginners needing to build their resumes (many of which are terribly inflated), and many of those are still in the 'coding ain't so hard' mindset, not having learned all the really tough things about UI, usability, interoperability, and user acceptance - not to even mention how precisely business logic needs to be implemented.

Sure, for 'simple phone apps' that's not much of a problem, but for complex government or industry BPR or 'leaning' projects, you really need Experience.

(I found the whole 'we can pop out Covid tracking software Next Week' thing hilarious: we haven't heard much about it since then, have we? And those who HAVE done it, or tried, have found it so wrapped in subtle complexities that delivery timescales are longer than Covid cures!)

Anyhow, what I found - in addition to very tightly managed expectations (and penalties) - is that if you can establish a good working relationship with the 'overseeing entity' ('vendors' : we called them pimps or madames), and have return business (or better, More) as a lure, you can get good value for much of the 'drudge' work, plus huge 'force multipliers' for tough deadlines. In some rare cases you might even discover great new talent...that part is Fun.
But, like most high demand talent, the best Just Cost More, and you get what you pay for.

Huh. Off stump. Enjoy.
 

Totoro

Member
Joined
Aug 4, 2019
Messages
94
Likes
67
Location
Boston, MA
He is off the mark. He says people over 50 are not needed as developers. That is just false. If you know what you are doing, you can easily get jobs without age being a barrier.

On income, many companies have stock/stock options that significantly increase the amount of wealth you can accumulate. He talks about a nurse. Show me a nurse that earns such benefits. Does not exist. Stock options/stock are the key to going beyond paycheck to paycheck.

Phil has been full of it on many subjects over the years. I remember his company down the street from mine during the first boom, which actually advocated Tcl/Oracle as the be-all-end-all web programming platform. They would also lease an Acura NSX to anyone who worked for them and referred 3 people. They went out of business even before the crash.

Somehow he was able to submit this as a dissertation at MIT (!!!!) https://philip.greenspun.com/panda/

'Nuff said
 

Totoro

Member
Joined
Aug 4, 2019
Messages
94
Likes
67
Location
Boston, MA
IMHO, 'Agile' has been adopted too widely. It is most applicable to non-critical apps or web services, but much less so to critical system development.

There has also been growth in the 'paradigm market', selling courses for methodologies. The original 'Agile' proponents have recently come out against the formal 'Agile' methodologies that have grown like Topsy.


http://codemanship.co.uk/parlezuml/blog/?postid=1580
Or one of my comments:

I vaguely remember that the original authors of the "Agile Manifesto" publicly recanted. For a while the industry was inundated with humanities major "scrum masters" who subtracted value.

Nowadays it's humanities-major PMs doing that (it used to be that PMs at least had some minimal domain knowledge).
 

Neddy

Addicted to Fun and Learning
Joined
Mar 22, 2019
Messages
756
Likes
1,031
Location
Wisconsin
This is long past beating a dead horse, but I do find it kind of fun to look back on how PM looks from a distance now.

Agile, in its basics, does have great value, and is NOT new: witness the Lockheed Skunk Works, Bell, and more recently Musk's 'continuous improvement'. Really it is nothing more than how to build new stuff, well, and fast, which just happens to apply very nicely to software development.

I witnessed the 'great flocking' to Agile/Six Sigma training with despair: a very sad combination of 'group think / latest easy management sales trend' plus 'those who can't, teach' (too many 'former' PMs found teaching 'Agile' courses a less stressful and nearly as lucrative career path).
The result was that most students (love that 'humanities major' comment, as rude as it is!) learned yet another recipe to follow slavishly, with little real benefit to the end product/project... but got their 'Six' certification anyway!

Observing project after project technically meet goals and budgets (carefully re-defined over the course of the project, of course), complete with huge management victory dances, while creating kludgy, fussy, poorly defined/documented, overly complex and inelegant solutions (JRiver, anyone? that thing drives me nuts almost every time I have to interact with it) eventually drove me out to an early retirement.

That, and the growth of politicized, career-chasing, risk-averse management 'oversight', basically killed entirely the creative joy I was lucky to have enjoyed for the first half of my career.
(I lost count of the number of times I was lectured by senior management about my insistent use of Faster, Better, Cheaper as a project meme, or of the number of times I was 'instructed' that I was clearly lying, as no PM ever had a 100% project success rate.)

However, perhaps the 'leaner', 'meaner' commercial environment today will allow that 'creative success' environment to thrive in some corporate/government settings.
I had many years of sheer joy leading/directing/coaching teams of creative developers to new types of architectures/UIs, all while being deeply embedded in the customer's 'society'.
Refining end goals and means with the customer's management was an art unto itself, and was more often enjoyable than stressful.
Finally, and most important: at the end of the day, watching customer 'worker bees' truly enjoy their new platform for its less stressful, more productive, more informed workday was the best reward.

Well, in any event, being a stubborn champion of innovation, minimized critical path, and customer appreciation/value will likely always be the 'tall nail needing pounding' in most organizations.

It is what it is.
 

Webninja

Senior Member
Forum Donor
Joined
Oct 8, 2018
Messages
419
Likes
469
Location
Los Angeles
Great post with so much that I agree with. I started as a programmer, moved to PM and now a product manager, all in Marketing/Ad industries. The one key ingredient to a successful launch/delivery is trust. I had so many internal stakeholders and clients simply not trust our team, mostly due to lack of knowledge.

This lack of trust then permeates all aspects of the project. The schedule, quality of code, design best practices, number of resources and so on. They always doubted what we recommended as the correct solution.

Unfortunately, we became exhausted defending even basic industry standards, so we often gave in, and built whatever specs the client wanted. At least we weren’t building mission critical software, but it is sad to see young engineers and designers realize how boxed in they really are.
 