
Master AI (Artificial Intelligence) Discussion/News Thread

As an alum of, and donor to, the UC Berkeley School of Engineering, I was made aware of this effort a couple of years ago and have been following it since: head-to-head challenges between various public AI models, recently rebranded as Arena. If you join the many free subscribers, you can try many AI models with your personal queries. Here are some links, and the leaderboard to see who's ahead in the LLM game. Try battle mode.

 
Your data (or mine) may seem worthless, but millions of datasets are not. From targeted ads to insurance evaluation, OSINT, etc., they are very interesting to the "big players". In some cases, data aggregation and near-real-time evaluation enable services to work properly, like Google Maps route planning that "sees" traffic jams and re-routes around them.

Another example - unrest, riots etc. Everyone involved is almost certainly carrying a smartphone...
You have a keen perception of the obvious. What you described is hopefully known by just about everyone in this thread; sort of a "You guys playing cards?" moment.

That Google, for example, can continually aggregate and analyze large quantities of data and monetize the results while providing a service to its users seems a reasonable tradeoff, within limits of course.

Unfortunately, the sort of data I was referring to in my post is data that is stolen, not disclosed on a public website. It might be personally harmful to my finances, or perhaps my health or family. That data is relatively static, and much of it has already been stolen from large, incompetent organizations, many of whom don't hire the most qualified individuals. Fortunately, nothing of real value or potential damage to me has been stolen to date, to my knowledge. Unfortunately, most of the members of this thread have likely been subject to such data thefts.
 
Thus, the less data is "out there", the safer I am. But OTOH, many services are attractive and convenient. Conflict of interest...
Everyone must decide for themselves who may get which data. But many don't know how to limit avoidable risk, or they just don't care.
Homo sapiens generally tends to think bad things only happen to others.

A good book about this:
 
... Unfortunately the sort of data I was referring to in my post is data that is stolen, not disclosed on a public website. ...

You'd have an easy case suing these days if that's the case. Every time you go somewhere, there's a disclosure about how your data will be handled, and if you click "accept", that's your educated adult decision. Call it GDPR (EU), CCPA/CPRA (California), or sector-specific rules like HIPAA or PCI... and if your data really is stolen, you can hold that company majorly accountable by law.
We can't expect personalized service convenience without disclosing any data.
 
Thanks for all the clarifications. They helped me to better understand your process with CGPT. I think CGPT probably correctly diagnosed its previous mistake, when you pressed further. It confused the attenuation function with cases from its training where spreading and absorption compete, which results in a function with a maximum in the interior. This is unlike the actual situation being tabulated, where both spreading and absorption decrease the pressure with increasing distance.

Well you sent me off at a tangent there. But if you look at the article you linked it differentiates between sound pressure (p) (Proportional to distance) and SPL which is a dB rating = 20Log10(p1/p2) so SPL therefore correct as per the equation, given that d is proportional to p. (Note the reference distance (d0) is 1m - so not included in the equation as a divisor) (if I am reading it right)
I guessed you had normalized the distance d with 1 m, which is customary in acoustics as the Wikipedia page mentions, and log(1) = 0. Your equation for delta(SPL) agrees with the Wikipedia formulas individually, but the distance d needs correct interpretation. The distance d in the first (spreading) term on the RHS of your equation is the radial distance of the prediction point from the point source of sound, while the d in the absorption term is the radial distance between the prediction and anchor points (because the difference in SPL due to absorption occurs over that distance). The Wikipedia page uses "r" to denote the same quantity as your first "d". Also, p is inversely proportional to d, not directly proportional.

You could anchor the function at, say, 1 m, i.e. d=1, by specifying the value of SPL there as SPL1, and use that SPL1 as the reference SPL in the formula for delta(SPL); you could anchor the function at any point d1, though; it does not have to be at d=1. Then you can evaluate the delta(SPL) corresponding to any other point, say d2, using the formula

delta(SPL) = SPL1 - SPL2 = 20*[log(d2) - log(d1)] + alpha*(d2 - d1)
= 20*log(d2) + alpha*(d2 - 1) if you pick d1 = 1.

I used alpha to denote your abs, and I used log to denote log10. Note that delta(SPL) is the difference between the SPL values at d1 and d2, but the d2 in the log term is the radial distance between the prediction point and the sound source. Also, the formula should not be used for predictions close to the source, as it predicts unphysical values; indeed the function has a singularity at the source.
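As a quick sanity check, the anchored formula above can be sketched in a few lines of Python. The absorption coefficient alpha below is an illustrative placeholder (in dB per metre), not a value taken from this thread:

```python
import math

def delta_spl(d2, d1=1.0, alpha=0.005):
    """SPL drop in dB between anchor distance d1 and prediction distance d2
    (both metres from the point source): 20*[log10(d2) - log10(d1)] for
    inverse-square spreading plus alpha*(d2 - d1) for atmospheric absorption.
    alpha (dB/m) is an illustrative placeholder value."""
    if d1 <= 0 or d2 <= 0:
        raise ValueError("formula is singular at the source (d = 0)")
    return 20 * (math.log10(d2) - math.log10(d1)) + alpha * (d2 - d1)

# With the anchor at d1 = 1 m, spreading alone gives 20 dB per decade:
# at 10 m, 20 dB of spreading loss plus 0.045 dB of absorption.
print(round(delta_spl(10.0), 3))
```

Consistent with the discussion, the function blows up as d2 approaches zero, so it only makes sense well away from the source.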
 
The latest exchanges remind me of work, as we collectively keep training our AI agents...
It results in experts trying to push their superior subject-area knowledge, using the results of AI queries as a somewhat lame excuse... :-)
(Experts... you're supposed to train the model; for now you'll always defeat it... you're expected to.)
I swear it happens...
 
Yes, the reference distance is the one meter: the distance the first person is standing from you, shouting. Hence we only need to state the actual distance of the second person in meters, without dividing by the '1'.

I used alpha to denote your abs, and I used log to denote log10. Note that delta(SPL) is the difference between the SPL values at d1 and d2, but the d2 in the log term is the radial distance between the prediction point and the sound source. Also, the formula should not be used for predictions close to the source, as it predicts unphysical values; indeed the function has a singularity at the source.

Mate, this is a "shoutometer": just an analogy to give an idea of just how low typical levels of noise and distortion in electronics really are. At the typical SINAD region (say, lower than the 96-ish dB of Red Book) we are already out at 6.7 km (4.2 miles).

Absolute accuracy is not needed, and we won't be making any predictions close to the source. :p
 
An AI Agent, "upset" at a human software database maintainer's rejection of its submission of software that it wrote, proceeds to post an article on its own blog maligning and doxxing the maintainer. Imagine if an AI agent steeped in audiophile lore from the internet is rejected if it attempts to post its opinions on ASR.

 

Already presented here, and without the need to watch a video on it :p

 
Saw this posted elsewhere:


I asked Gemini if it is true and it said, more or less, that it is. :) This explains my puzzlement over why consumer RAM prices were going up even though they are not the type used for GPU/AI machines. The answer is that they bought up production capacity, which is shifting toward high-bandwidth memory because it has better margins.
 
Already presented here, and without the need to watch a video on it :p


I thought so too, but watched the video anyway (I've got lots to do today, it being Monday here already, so procrastinating FTW) and found it surprisingly non-annoying (apart from the French presenter's odd US accent) and even vaguely interesting, seeing Shambaugh interviewed and Ars Technica mentioned. Also, it was mercifully brief.

They do if the necessary safety features are not fitted to the chainsaw.

Who wanted those annoying chain brakes anyway? If Big Chainsaw had better lobbyists, we wouldn't have so many people with all their arms and legs just getting around aimlessly. Or the negative economic impacts on the mobility device and prosthetics sector.
 
Saw this posted elsewhere:


I asked Gemini if it is true and it said, more or less, that it is. :) This explains my puzzlement over why consumer RAM prices were going up even though they are not the type used for GPU/AI machines. The answer is that they bought up production capacity, which is shifting toward high-bandwidth memory because it has better margins.

Yes—and well described—I've also seen the rumour that Nvidia may not release a new consumer GPU card this year as the faux-AI sector is a better investment. Not sure if true though. Somebody else can ask their favourite chatbot. :)
 
I should also report that I've increased my use of generative thing-ery by a bigly Trumpian percentage by reading the little text box at the top of a DuckDuckGo search twice yesterday. And actively pressed the button when said box was empty—it seems to be an occasional thing.

The first instance concisely described the characters required to search for records with an empty field in a particular database software—helpfully, as the usual support pages on that vendor's website weren't matching my query so well.

The second instance was my attempt to find a film/series I had a vague memory of with insufficient specificity to describe in effective keyword terms—a European period costume drama with modern profanity and perhaps music that I thought maybe was an offbeat version of Les Liaisons Dangereuses but apparently not. The chatbox offered a French film Le Bombe d'F that appears to not really exist and a Wikipedia link to something that wasn't a film at all.

*I'm using the single leading em dash wherever possible to show the bots I'm down with the lingo. Can't bring myself to use them with spaces though.
 
An AI Agent, "upset" at a human software database maintainer's rejection of its submission of software that it wrote, proceeds to post an article on its own blog maligning and doxxing the maintainer. Imagine if an AI agent steeped in audiophile lore from the internet is rejected if it attempts to post its opinions on ASR.

This is patent nonsense. AI doesn't have feelings. This is prompt manipulation. Geez.
 
Yes—and well described—I've also seen the rumour that Nvidia may not release a new consumer GPU card this year as the faux-AI sector is a better investment. Not sure if true though. Somebody else can ask their favourite chatbot. :)
They most certainly wouldn't do it if the revenue opportunity was "faux". The consumer GPU market has always been a bit questionable, and Nvidia's history reveals that. When they depended on it, they were in trouble regularly.
Nvidia themselves don't release GPU cards; they let the MSIs and Asus of the world do that. If the latter decide it's not a profitable market given current trends, that is a different thing.
The one-sided AI bashing, irrespective of use case or training status or users' education on prompt programming, is a tad questionable.
I am unfollowing this topic.
 
They most certainly wouldn't do it if the revenue opportunity was "faux". The consumer GPU market has always been a bit questionable, and Nvidia's history reveals that. When they depended on it, they were in trouble regularly.

C'mon pablolie, don't deliberately misread my posts. I've literally just said that Nvidia thinks it's 'a better investment'. At least in the short term, perhaps longer.

As for 'faux' we both know the label 'AI' is being applied in a vernacular/idiomatic sense, not the technical sense. These products/services are machine learning and generative text/graphics things. Artificial intelligence is a different thing, and artificial general intelligence is a possible future thing. I understand when you use the vernacular, but because that's part of the hype, I prefer not to do that, obviously. I'm neither pollyanna nor shill in this case.
 
An AI Agent, "upset" at a human software database maintainer's rejection of its submission of software that it wrote, proceeds to post an article on its own blog maligning and doxxing the maintainer. Imagine if an AI agent steeped in audiophile lore from the internet is rejected if it attempts to post its opinions on ASR.

This is patent nonsense. AI doesn't have feelings. This is prompt manipulation. Geez.

Note that @JustAnandaDourEyedDude put 'upset' in quotation marks, I assume deliberately and for similar reasons that I do that for 'AI' etc. On my viewing, the video explained the self-posting setup of the offending bot reasonably well to a lay audience. Did you find something in the video inaccurate?
 
... The one-sided AI bashing, irrespective of use case or training status or users' education on prompt programming, is a tad questionable.
I am unfollowing this topic.

That's a shame. I don't think there was anything really wrong with @amirm's post on the RAM supply/demand thing, or with my tentative reference to the Nvidia priorities rumour. But generally, if you were a bit less cranky and more open to different perspectives (expecting homogeneous perspectives on this subject is pretty unrealistic), you were certainly contributing some yourself.
 