
How did software development become so hard (Windows)???

JW001

Active Member
Joined
Mar 22, 2019
Messages
100
Likes
112
Location
East Coast
...

Every major company I've worked for: Intel, Cisco Systems, Bell Labs, Freescale Semiconductor, AMD... they all use Unix-like systems for their servers and major engineering efforts. Without exception. Windows is used to get to HR systems and calendars, and similar work-a-day BS.

...
I work for a large corporation as well. I do my whole Java and C++ development on a Windows laptop with 32GB of memory, using the Eclipse IDE. There is no reason not to these days, when virtualization software (VirtualBox) can run a Linux VM of my choice right on my laptop, and Chef and Vagrant allow me to automatically build and test the whole system in minutes. Life has never been easier for me.
 

MRC01

Major Contributor
Joined
Feb 5, 2019
Messages
3,502
Likes
4,144
Location
Pacific Northwest
When someone makes an ergonomic keyboard with buckling spring switches, I'll try it. Until then, the frustration of crappy switches far exceeds the comfort of better ergonomics. With buckling springs, the closest I can get to ergonomic is an 87-key TKL layout. This at least places the pointing device closer giving a more natural position for my right arm.

Cherry switches are better than rubber domes, but pale in comparison to buckling springs. Ask me how I know this ;) In fact, if anyone wants a top-quality 87-key keyboard with Cherry switches, PM me. I've got one or two of them lying around that I never use.
 

MRC01

Major Contributor
Joined
Feb 5, 2019
Messages
3,502
Likes
4,144
Location
Pacific Northwest
... Thus, I have never used anything but trackballs.
Currently, the 'huge' version from the Japanese company Elecom.
I have an Elecom thumb trackball or two lying around somewhere. I'm not a lefty, but Elecom is the only company I know of that makes a left-handed thumb trackball. The Elecoms were decent, though I prefer the Logitech MX Ergo that I got a couple of years ago.
 

Grumpish

Active Member
Joined
Jul 2, 2021
Messages
148
Likes
144
The rot started with the object oriented paradigm, which was basically an idea to persuade badly trained programmers to write properly structured code and unfortunately coincided with resources (mainly memory) getting cheap. Programming in assembler (airline reservation systems and banking systems) was insanely intensive in people resources, but very efficient in hardware terms.
 

pseudoid

Master Contributor
Forum Donor
Joined
Mar 23, 2021
Messages
5,221
Likes
3,570
Location
33.6 -117.9
thumb trackball
It turns out that the digit we call the thumb is not as dexterous/articulated [?] as the rest of our digits.
But not only because it is short one knuckle [?].
Also, perhaps, because we may share a common ancestor with apes that use their thumbs for gripping, grabbing, climbing, etc.

But according to David Carrier (an evolutionary biologist at the University of Utah), "Of all the motions the hand can perform, perhaps none is so distinctively human as a punch in the nose. Our thumb, which folds over the second and third fingers as a buttress, allows us to concentrate the striking power while protecting the delicate hand bones."

What this proves is that the thumb, otherwise used for a punch, is not the best digit for controlling a delicate trackball. YMMV
Take a look at the younger crowd that grew up with smartphones. Their thumb-texting speed might make David Carrier marvel at the speed of the human evolutionary process, which some may call progress.
 

MRC01

Major Contributor
Joined
Feb 5, 2019
Messages
3,502
Likes
4,144
Location
Pacific Northwest
IME, thumbs can be just as dexterous as other fingers, in their own unique way. I find that non-thumb trackballs stress my fingers & hands with extended use. As do mice. In contrast, I can use a thumb trackball all day long with no discomfort. And I find them faster and more accurate. Have been using them for the past 25 years or so.
YMMV
 

Chromatischism

Major Contributor
Forum Donor
Joined
Jun 5, 2020
Messages
4,824
Likes
3,757
Literally no one games with a trackball, so that says all I need to know about their speed and accuracy.

My personal experience with them ended with fatigue and I could easily see getting tendonitis from the overuse of that joint.
 

maverickronin

Major Contributor
Forum Donor
Joined
Jul 19, 2018
Messages
2,527
Likes
3,311
Location
Midwest, USA
A wireless trackball is great for controlling an HTPC from a couch or bed.
 

storing

Active Member
Forum Donor
Joined
Aug 27, 2021
Messages
226
Likes
220
The rot started with the object oriented paradigm, which was basically an idea to persuade badly trained programmers to write properly structured code and unfortunately coincided with resources (mainly memory) getting cheap. Programming in assembler (airline reservation systems and banking systems) was insanely intensive in people resources, but very efficient in hardware terms.
Huh? I can write object-oriented code in assembly. The rot started when programming became more popular and gained more languages. A larger audience inevitably means more mediocre and bad software as well. Not everyone is at the same level. Add to that bigger companies and more management which knows nothing about actual programming or software design and only cares about (in their opinion) fancy, needed features. I'm probably missing some other factors here, but my key point is that blaming it on one single thing (and OOP of all things) is wrong.
 

pseudoid

Master Contributor
Forum Donor
Joined
Mar 23, 2021
Messages
5,221
Likes
3,570
Location
33.6 -117.9
IME, thumbs can be just as dexterous as other fingers, in their own unique way. I find that non-thumb trackballs stress my fingers & hands with extended use. As do mice. In contrast, I can use a thumb trackball all day long with no discomfort.
That is the great part of the Elecom trackball offerings.
Yes, you can have it many different ways: Left or right-handed. Thumb or fingers-operated. Multiple configurable buttons.
I cannot live w/o the single-button remaps for the Alt+F4 (Close Application) and Ctrl+F4 (Close Active Page) commands.
I had been using the original Kensington trackballs for so long that I still have the reverse 8-ball (regulation 'pool' ball).
If anyone can remember that far back: the original Kensington trackballs came with real precision (ABEC-grade?) metal bearings, before infra-red pointing devices were a thing. As good as the hardware was, Kensington has always sucked at the software-side and could never figure out how to properly write drivers for early Linux.
A few months ago, my mate wanted a white trackball to match her retro-white mech'l keyboard and white curved 27" monitor. I bought her a white Kensington Orbit w/ring, and their 'drivers' are still sucky; they barfed on a NUC8/Win10 machine. I hope she never finds out that the built-in generic Windows drivers are what she is actually using. ;) But hey, it's all white and a solid WAF point for me.
 

MRC01

Major Contributor
Joined
Feb 5, 2019
Messages
3,502
Likes
4,144
Location
Pacific Northwest
... As good as the hardware was, Kensington has always sucked at the software-side and could never figure out how to properly write drivers for early Linux. ...
I've never seen a pointing device that needed drivers on Linux. Plug & play, it works. You can program the sensitivity, acceleration, buttons & features using standard Linux utilities like xinput, imwheel and xbindkeys.

That said, trackball vs. mouse is like boxers vs. briefs. Far be it from me to tell others what to use. That's what I meant by YMMV above.
 

MRC01

Major Contributor
Joined
Feb 5, 2019
Messages
3,502
Likes
4,144
Location
Pacific Northwest
Huh? I can write object-oriented code in assembly. The rot started when programming became more popular and gained more languages. A larger audience inevitably means more mediocre and bad software as well. Not everyone is at the same level. Add to that bigger companies and more management which knows nothing about actual programming or software design and only cares about (in their opinion) fancy, needed features. I'm probably missing some other factors here, but my key point is that blaming it on one single thing (and OOP of all things) is wrong.
I agree. Every few years, new programmers get further from the hardware, working in progressively higher levels of abstraction. This can promote productivity, as one can write apps faster. But it also promotes not only ignorance of how to optimize CPU and RAM, but the dangerous yet popular attitude, "optimizing is the job of the ____" Choose your term: compiler, packager, programming environment, operating system, etc.

Some of you may have read this little book: https://www.amazon.com/Programming-Pearls-Press-Louis-Bentley/dp/0201103311
Those who have, will smile. Those who haven't might enjoy it.

Nothing wrong with OO programming. It highlighted some important ideas like abstracting & modeling not just data, but also logic/behavior. And organizing code for encapsulation, making dependencies explicit and minimizing them. Of course good programmers had already been doing this for decades, but OO came along and popularized it, which was good. Sadly, too many programmers read those books and missed the point.
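
A tiny, made-up Java sketch of what that encapsulation idea buys you (nothing from the thread, purely illustrative): callers depend only on the public methods, so the representation can change without touching any calling code.
Code:
// Toy illustration of encapsulation: the internal representation (cents in a long)
// is hidden behind a small, explicit interface, so it can be swapped out freely.
public class Account {
    private long balanceCents;                 // representation is hidden

    public void deposit(long cents) {
        if (cents < 0) throw new IllegalArgumentException("negative deposit");
        balanceCents += cents;
    }

    public boolean withdraw(long cents) {
        if (cents < 0 || cents > balanceCents) return false;
        balanceCents -= cents;
        return true;
    }

    public long balanceCents() {
        return balanceCents;
    }
}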
 

storing

Active Member
Forum Donor
Joined
Aug 27, 2021
Messages
226
Likes
220
I agree. Every few years, new programmers get further from the hardware, working in progressively higher levels of abstraction. This can promote productivity, as one can write apps faster. But it also promotes not only ignorance of how to optimize CPU and RAM, but the dangerous yet popular attitude, "optimizing is the job of the ____" Choose your term: compiler, packager, programming environment, operating system, etc.
While there is some truth in these principles, I'm wary of generalising this. You see, you used 'can promote' for productivity but just 'promotes' for the ignorance and attitude. I'm not sure if you intended to do that, but written as such, to me this sounds a bit close to certain sentiments like 'high level languages don't make real programmers' and 'real programmers know how CPU and RAM work' etc. I'm exaggerating somewhat here, but mainly to highlight some points I don't generally agree with. And again: I'm not sure if you were pointing in that direction.

E.g. I know programmers who can write large-scale software which is plenty fast and has a proper architecture (in my opinion the part of software which is actually the most difficult) but who wouldn't be able to tell you much about what a function is doing if you showed them the disassembly and live CPU register values.

With respect to optimizing in reference to compilers: there have been whole teams spending years on just the optimizer of modern compilers. These people know CPU architecture by heart and are usually extremely good at what they do. So, except for extreme cases or bad compilers: the attitude of having the optimizer do its job is in my opinion the correct and practical one, because I know for a fact I probably won't be able to outsmart the people who do this job for a living.
 

q3cpma

Major Contributor
Joined
May 22, 2019
Messages
3,060
Likes
4,420
Location
France
Nothing wrong with OO programming. It highlighted some important ideas like abstracting & modeling not just data, but also logic/behavior.
What's wrong is not OOP per se, it's
1) Forcing OOP all the time, even when data-oriented programming makes more sense; in fact, forcing a paradigm on the user (be it functional in the case of Haskell or OO in the case of Java/Smalltalk) is not what programming gurus want. Common Lisp and Tcl are good examples of languages that give you tools to elegantly use multiple paradigms but don't force you to choose one.
2) "I invented the term Object-Oriented, and I can tell you I did not have C++ in mind." – Alan Kay. Smalltalk/Self and CLOS are better than Simula-style OO.
 

q3cpma

Major Contributor
Joined
May 22, 2019
Messages
3,060
Likes
4,420
Location
France
I'm not sure if you intended to do that, but written as such, to me this sounds a bit close to certain sentiments like 'high level languages don't make real programmers' and 'real programmers know how CPU and RAM work' etc. I'm exaggerating somewhat here, but mainly to highlight some points I don't generally agree with. And again: I'm not sure if you were pointing in that direction.
But that's right. You're not a "real programmer" if you push buttons on a black box to get what you want without knowing (different from knowing and not caring) about the implementation details. Same way that using a 3D printer doesn't make you a machinist.
E.g. I know programmers who can write large-scale software which is plenty fast and has a proper architecture (in my opinion the part of software which is actually the most difficult) but who wouldn't be able to tell you much about what a function is doing if you showed them the disassembly and live CPU register values.
Architecture is important, but it clearly doesn't replace programming ability. Which is why CS teachers and "software architects" exist.
With respect to optimizing in reference to compilers: there have been whole teams spending years on just the optimizer of modern compilers. These people know CPU architecture by heart and are usually extremely good at what they do. So, except for extreme cases or bad compilers: the attitude of having the optimizer do its job is in my opinion the correct and practical one, because I know for a fact I probably won't be able to outsmart the people who do this job for a living.
Most of these so-called programmers use dynamically typed languages with no AOT compiler support (JIT is fine and dandy, but you don't have the same extensive whole-program optimisation opportunities). They're also garbage collected, which puts additional requirements on the user's knowledge if he doesn't want to suffer from random latency.
And since it looks like we're talking about C/C++, you need to understand the compiler to write high level code that'll be well optimised. Otherwise, we get the current situation: the vast majority of people using those languages don't even know what volatile or restrict mean, they don't know what an UB is, never read parts of the spec and produce unportable code.
 

MRC01

Major Contributor
Joined
Feb 5, 2019
Messages
3,502
Likes
4,144
Location
Pacific Northwest
...
With respect to optimizing in reference to compilers: there have been whole teams spending years on just the optimizer of modern compilers. These people know CPU architecture by heart and are usually extremely good at what they do. So, except for extreme cases or bad compilers: the attitude of having the optimizer do its job is in my opinion the correct and practical one, because I know for a fact I probably won't be able to outsmart the people who do this job for a living.
Too many developers take this too far, expecting the teams who optimize platforms & compilers to make up for their own inability to optimize their code. Some system limitations are inherent; good programmers need to understand those limitations and find solutions that work around them.

Optimizing the compiler or system tends to give incremental gains; optimizing your solution to the problem space gives order of magnitude gains. Some examples of what I mean by this:

As part of a web server, we had to match billions of incoming strings against white/black lists having thousands of entries each. I suggested using the Aho-Corasick algorithm. The developer implemented it in Java and it wasn't any faster than brute-force string matching. This smelled fishy so I profiled the code. Turns out the developer implemented it with java.util data structures (because why not? it's up to the JVM developers to optimize their container libraries). The code spent more than 90% of its time auto-boxing chars, and that was only the tip of the inefficiency iceberg. The developer didn't know that (A) Java can only put objects (not primitives) in containers, and (B) objects are very expensive, especially ephemeral ones, and (C) autoboxing adds even more unnecessary operations. I rewrote the same algorithm eliminating objects (Java arrays can take primitives which eliminated all the object/string/autoboxing overhead) and with no other changes it ran 20 times faster in the short term, and 50 times faster in the long term, as when garbage collection kicked in, it didn't need to clean up hundreds of millions of tiny objects.
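
To give a feel for that boxing overhead, here is a minimal made-up sketch (not the actual matcher; class and method names are hypothetical) contrasting a java.util container with a primitive array for a simple per-character workload:
Code:
// Hypothetical sketch of the boxing overhead described above: counting character
// frequencies with java.util containers boxes every char and count, while a plain
// int[] table touches no objects at all.
import java.util.HashMap;
import java.util.Map;

public class BoxingDemo {
    // Boxed version: each char is boxed to a Character key, and every count above
    // 127 allocates a fresh Integer (the small-value cache no longer helps).
    static Map<Character, Integer> countBoxed(String input) {
        Map<Character, Integer> counts = new HashMap<>();
        for (int i = 0; i < input.length(); i++) {
            counts.merge(input.charAt(i), 1, Integer::sum);
        }
        return counts;
    }

    // Primitive version: same result, no boxing, no per-character objects.
    static int[] countPrimitive(String input) {
        int[] counts = new int[Character.MAX_VALUE + 1];
        for (int i = 0; i < input.length(); i++) {
            counts[input.charAt(i)]++;
        }
        return counts;
    }

    public static void main(String[] args) {
        String input = "the quick brown fox jumps over the lazy dog ".repeat(200_000);
        long t0 = System.nanoTime();
        Map<Character, Integer> boxed = countBoxed(input);
        long t1 = System.nanoTime();
        int[] primitive = countPrimitive(input);
        long t2 = System.nanoTime();
        System.out.printf("boxed: %d ms, primitive: %d ms, 'o' count: %d / %d%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000,
                boxed.get('o'), primitive['o']);
    }
}
The boxed version churns out short-lived wrapper objects on every increment; the primitive version allocates nothing beyond one array, which is essentially the kind of change that produced the speedup described above.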

We needed to read input from about 20 different sockets and merge the data. The developer (in C) implemented a multithreaded approach, one thread per socket. So far, so good, it was an improvement on the single threaded version. But he didn't understand how expensive heap operations can be, under certain conditions. I'm talking malloc/free, not new/delete. Changing the code so it cycled through a ring of buffers that were allocated once at startup, eliminating all subsequent malloc/free calls, made it more than 10 times faster.
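
The original fix was in C with malloc/free; keeping to Java as in the sketch above, the same "allocate once at startup, reuse in a ring" idea looks roughly like this (a hypothetical class, not the real code), where the payoff is avoiding per-message allocation and GC churn rather than malloc/free cost:
Code:
// Hypothetical sketch of a fixed ring of reusable buffers, allocated once at
// startup. In steady state no per-message allocation happens at all.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BufferRing {
    private final BlockingQueue<byte[]> free;

    public BufferRing(int count, int bufferSize) {
        free = new ArrayBlockingQueue<>(count);
        for (int i = 0; i < count; i++) {
            free.add(new byte[bufferSize]);    // one-time allocation at startup
        }
    }

    // Borrow a buffer; blocks if all buffers are in use (natural back-pressure).
    public byte[] acquire() throws InterruptedException {
        return free.take();
    }

    // Return a buffer to the ring once its contents have been consumed.
    public void release(byte[] buffer) {
        free.add(buffer);
    }
}
Each socket-reading thread would acquire() a buffer, fill it from its socket, hand it off to the merging stage, and release() it once the data has been consumed.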
 

storing

Active Member
Forum Donor
Joined
Aug 27, 2021
Messages
226
Likes
220
But that's right. You're not a "real programmer" if you push buttons on a black box to get what you want without knowing (different from knowing and not caring) about the implementation details. Same way that using a 3D printer doesn't make you a machinist.
So where do you draw the line then? I mean, you can comprehend everything that goes on in a PIC16F84 (just to name something), but how many people can say the same for a modern CPU? Or even a modern microcontroller? I can't, can you? How many people can truly say they don't treat at least a part of the device they program as a black box? I for sure am not going to waste time on even trying to know everything I touch inside out, because that time would have close to zero return. I don't need to (though I do, roughly) know how my electric drill works to be able to put it to full use, just like I don't need to know the function of each and every register in a CPU to be able to get things done properly. tl;dr: sure, some level of insight helps, but we shouldn't lie to ourselves thinking black boxes are bad per se.

And since it looks like we're talking about C/C++, you need to understand the compiler to write high level code that'll be well optimised.
Only to some small extent; for the rest I don't agree, from experience and because of what I mentioned previously: good luck trying to outsmart the optimizer writers. It might depend on the particular application, or maybe my code by itself was already spot on, I'm not sure, but having spent in total many weeks of my career trying to optimize C++ code running audio and video processing: the cases where I could actually make things better were rare, and even then it was a matter of a couple of percent improvement. Now of course that's starting from code which doesn't do stupid things like passing everything by value, and which caches a lot of allocations, but those are basics of the language; one doesn't need to know what the compiler does with that.

Otherwise, we get the current situation: the vast majority of people using those languages don't even know what volatile or restrict mean, they don't know what an UB is, never read parts of the spec and produce unportable code.
Honest question: do you have numbers to back up that 'vast majority'? At least when we're hiring we've never encountered this; it makes me wonder where these people are.
 