
None of it matters anymore. None of it.

And there is Floe, where 2 flutes, 4 saxophones, and 2 French horns keep time with 1 synthesizer in 1981, proving we can all get along. :cool:

Ira Glass's cousin :)

I would argue that Peter Dennis Blandford Townshend's 1971 opus Baba O'Riley (semi-named for minimalist composer Terry Riley) proved (or at least provided additional evidence for) that as well... ;)
 
Yeah, sorry about that! I felt the same way... :eek: especially since it was a Klipsch Heresy! :facepalm:
I mean, if she were a Quad ESL-57... that'd have been OK.
:cool:

PS Margo will always be at least an ESL-57 to me. :D
The first time I heard that album it changed my musical preferences forever. When I was younger I listened almost exclusively to classical music and instrumental jazz, and some choral music. In the early 90s a friend at work knew I was into audio systems and live recording, and asked me to help him "fix" the sound of his system. He was unhappy with how boomy and disconnected the bass sounded, and I told him experimenting with speaker positioning might help. He invited me to his house, and he demonstrated his dissatisfaction with one of his favorite songs, which was "Misguided Angel". I was immediately taken with Cowboy Junkies, that song, and the album, and later purchased every CD they had released. We fiddled with speaker positioning for about two hours, and improved it, but I convinced him he needed a couple of small subs to get the results he wanted. I became a fan not only of Cowboy Junkies, but of other groups with sparse acoustic and vocal arrangements. I even became a fan of Peter, Paul, and Mary after that. :)

I'll just have to forget the heresy of the Heresy.
 
Oh, I love the Cowboy Junkies. Mrs. H and I have seen them many times.
Memorably, the first time we did, in Plymouth, NH, Mrs. H bumped into (not quite literally, but close) Ms Timmins in the ladies room at the venue before the show. We've chatted with her a couple of times at meet 'n' greets after shows, and she's remarkably down-to-earth. In full disclosure, I would probably pay money to listen to her sing the contents of a metropolitan telephone book. Her voice depolymerizes my spinal cord (and I mean that in the most complimentary way possible).

Keeping this on topic (?!). Peter, Paul & Mary* did generate a few fine-sounding recordings. Here is (albeit at tawdry res) one of them. :)


* Who, interestingly, were in many respects the folk music boom equivalent of the Monkees (i.e., the Prefab Four) -- a manufactured group to cash in, quite literally, on the trend. See, e.g., https://500songs.com/podcast/episode-109-blowin-in-the-wind-by-peter-paul-and-mary-2/
 
I don't think that AI adds negatives... music was already floating belly-up with formulaic production, Auto-Tune, oversexualization over talent, flavors of the week, and reality TV hooks for relevance and eyeballs. I wouldn't mind if every major music studio feeding that garbage went under; in fact, I view that as an opportunity for real artists to carve themselves new relevance. AI *also* opens an opportunity for clear differentiation and quick discovery of truly talented artists. AI is just a tool that can be used in many different ways.

Let's face it... if AI can create better music than humans (who collectively now create horrible music under Music labels' management, for the most part), it's kinda sad for the art part of it, but confirmation that the music industry killed "music"... and it's an opportunity for human creativity to show it can beat AI repeatability.
 
 
Or, to say it with ELO:

 
But then, this "decay into nothing" is limited by background noise in the listening room, isn't it?
If I listen at, say, 96 dB peak (which is subjectively quite loud), and background noise is at 20-30 dB, then there are "only" 66 to 76 dB left, or am I "thinking it wrong"?

If that is a normal measurement of noise 20Hz to 20kHz then it shouldn't be too hard to hear a decay to nothing. That measurement is the sum (RMS) of noise across that frequency range, not the noise at an individual frequency or small set of frequencies which is much lower. This fact is commonly misunderstood as you have so kindly demonstrated. Thank you!
 
What exactly (did I allegedly misunderstand)? "Decay into nothing" of a note vs. decay into wideband noise is perceptually not the same. Even if you can hear below the noise "carpet", your hearing (and mine) will not be able to pick out that one frequency and ignore the rest. If it could, SINAD would be just a side note. Ignore both, yes we can; that's why we can partly "listen through" bad room acoustics. But that is no "serious" listening any more. A noisy room is not a good listening room (IMHO), in the same way that a noisy recording isn't a good one.
 
The limits of my hearing ability remain the same regardless of how the sound was originally produced, so if my sound reproduction equipment reaches/exceeds my limits, that's good enough for me.
 
The limits of my hearing ability remain the same regardless of how the sound was originally produced, so if my sound reproduction equipment reaches/exceeds my limits, that's good enough for me.
The individual limits even get more limiting with time... But this is a fact of life, we can mitigate it, but not avoid it.
 
J.S. Bach did do his part to maximize the dispersion of (his) genetic material...
Did he really wear that crazy wig when jammin with the family?
I don't think Elton John ever did?
But maybe?

Yeah, sorry about that! I felt the same way... :eek: especially since it was a Klipsch Heresy! :facepalm:
I mean, if she were a Quad ESL-57... that'd have been OK.
:cool:
Yep, and the lightning would have been dancing across the panel as the smoke arose thru the grillcloth. LOL
The Heresys ROCK; the Quads never did or could.
The quickest way I know to turn the Rolling Stones into a bedtime story... :p

Oh, I love the Cowboy Junkies. Mrs. H and I have seen them many times.
I hope they are more exciting in person than on The Trinity Session.
I remember all the hype in the audiophile press, but when I got the album it was, buzzzzzzz.
Rock to fall asleep by. Yes, it was very well recorded, but not a single cut on it made me want to listen to it again.
Unless it was to compare my listening notes to the writers'.

A little known secret: None of it ever did matter
Wrong

But... fidelity to what?
If a guitarist plays a guitar, and their fingers squeak on the strings... that is, for better or worse, part of the performance. The human part.
Right, and possibly as difficult for a Hi-Fi to reproduce accurately as anything.
50 Years of the Fender Guitar. ;)
 
I am batting 250!:facepalm:
If a guitarist plays a guitar, and their fingers squeak on the strings... that is, for better or worse, part of the performance. The human part.
But my AI can do that [human part] better than yours.:facepalm:

Music is not so binary anymore... not like it was, when the human part used a studio full of hardware to get the sound 'just' right.
 
So much music today is generated by AI; does it really matter if your DAC and amplifier have an SNR of >120dB? On a whim, I went to Suno and typed in the keywords that describe my favorite music genre, and what it created was surprisingly good and only took a few seconds. I'd dare say it was downright excellent compared to what I could do with real instruments and my gruff voice. The music was clear, detailed, articulate, and, believe it or not, enjoyable through my HD-650 headphones and DX5ii. If modern music is simply generated from hundreds of samples leeched off YouTube, Spotify, Apple Music, and Pandora, I argue this whole hi-fi thing no longer matters. None of it. What are we trying to accurately reproduce, exactly? Heck, with modern DSP mediocre speakers can sound pretty darn good, so what exactly are we striving to achieve?
For me, it matters because using a DAW lets me create and reshape existing music I love. I often make instrumental versions focused on drums and bass, mainly for my own enjoyment and learning.

I’m not really much of a musician:facepalm: (just play a bit of bass), but having access to multi-instrument or open files scattered around the web makes it easy to experiment. AI-generated music can sound fine, but actually being part of the process, even through remixing, feels more rewarding to me.

I keep my edits strictly personal and non-commercial, though some material is copyright free or licensed for creative use, which allows sharing on platforms like SoundCloud as a personal expression.

I'm curious if other forum members do the same.
 
Music is not so binary anymore... not like it was, when the human part used a studio full of hardware to get the sound 'just' right.
AI... binary. I see what you did there... ;)
 
@Snarfie and like-minded creators have a lot of material to play with for not much coin.

 
@Timcognito; Seems/Sounds like you've been experimenting...:)
Not really. Anything that requires expert digitation skills is blunted by my dyslexia, but I am a virtuoso of the radio and tablet at choosing exquisite sounds, and I was pretty good at sports in my youth. Surely you have noticed my abysmal typing skills. Those Humble Bundle sites sometimes have CAD software and other design bundles for cheap (my interest), but mostly programming, game, and music production bundles. Everything I've gotten there has been a great value, plus much of their take goes to some very good charities.
 
Synthesized music is free of the ambient noise or distortion captured by a microphone. As such, its fidelity can be much higher than that of naturally recorded music, increasing the demands to reproduce it accurately.

Years ago, when making computers make "music":

I learned how to tickle the sound chip in my Commodore 64 successfully, for three voices.

Then I got a Mac, and couldn't figure out how to coerce its sound chip to make tunes.

So, using Think C, I looked up the file format for WAV files and synthesized tunes sample by sample, finally getting to the point where I had 12 voices in stereo with reverberation and probably some other effects, writing the WAV file just a little faster than playback speed, so no waiting for results (other than code tweaking).

Then I got a Windows 98 machine, couldn't figure out how to tickle its sound device either, and continued with C.

Then quit. It was a fun experiment, learned a lot about digital sound.

There was, of course, no noise or distortions unless I added it.

Here's a couple I can still find:

They sound repetitive, but really aren't: one counter goes from 0 to 63 and, on wrapping, increments a second counter (which also goes from 0 to 63), and the two numbers are OR'ed, XOR'ed, AND'ed, or whatever else I could think of at the time to pick the pitches from a 64-note pitch table for each of (it sounds like) three voices here.

So, every "64 note bar" is different, for 4096 bars.


And another, same concept, different note picking algorithm from the counters:

 
You know... reflecting on the initialism for artificial intelligence... it might be time for serif fonts to make a comeback. On the other hand, maybe that Paul Simon song was ahead of its time...



AI ≠ Al ≠ AI Al

None of which is to be confused with... umm... steak sauce.



... ahem... where was I?
Oh, yeah...


Surely you have noticed my abysmal typing skills.
That's a negative, @Timcognito. I've not noticed.
:cool:
 