
GPT-4 / ChatGPT Plugins for Speaker measurement analysis and comparison

Roddy

Member
Joined
Oct 14, 2020
Messages
42
Likes
27
Ooh, this discussion is already devolving into an argument against 'Strong AI' and the difference between syntax and semantics...

Here's one I watched earlier:
This guy takes 17 minutes to get to the point, and his left audio channel is busted. His vids have been chasing me around YouTube for years, especially his famous paradox one. I doubt his original thumbnail mentioned ChatGPT because this vid is 2.5 years old. The paradox one is almost like trolling: he keeps saying he'll explain what a set is, then moves on. He eventually does it 5 minutes in. Talk about padding. I hope the algorithm leaves me alone now.
 
OP
Jeromeof

Addicted to Fun and Learning
Forum Donor
Joined
Jul 25, 2021
Messages
623
Likes
995
Location
Ireland
Just to clarify, GPT4 uses the black logo, not the green one, and currently you have to pay for it. Although I don't know if the code-writing ability changed much from 3.5 to 4. It would probably explain itself more succinctly.
You do have to pay for GPT4 at the moment. Not sure what OpenAI's ultimate revenue model is, but to get early access to stuff (and priority access to use their API) you pay $20. I am currently subscribing, but when the hype dies down and it's integrated into lots of the other apps I use, I will probably stop paying.
 

Roddy

Member
Joined
Oct 14, 2020
Messages
42
Likes
27
You do have to pay for GPT4 at the moment. Not sure what OpenAI's ultimate revenue model is, but to get early access to stuff (and priority access to use their API) you pay $20. I am currently subscribing, but when the hype dies down and it's integrated into lots of the other apps I use, I will probably stop paying.
Stop me if you've heard this before, but I think the revenue model is to sell it below cost so that everyone uses it. Then when it's embedded everywhere, including schools and governments, they jack up the price. $20 is already cheap, but the API tokens are so cheap you can buy 200 million prompts for $20. It'll practically be infrastructure, and then 20 years from now Congress will have debates over whether it's a monopoly that needs breaking up. Ultimately they'll do nothing and move on to the next moral panic. Just a hunch, though. I'll have to Google some historical examples on my Windows PC.
 

Axo1989

Major Contributor
Joined
Jan 9, 2022
Messages
2,894
Likes
2,949
Location
Sydney
OpenAI just published some more analysis of their GPT model behaviours for anyone interested.

It's more a case of approximately mapping mechanisms than 'understanding' the model conceptually. With a very large parameter space (billions of token proximity relationships in GPT-3, trillions in GPT-4) analysis is computationally intensive. Some would say tedious. But it's interesting they quantify in terms of "human preferences between explanations".
 

RayDunzl

Grand Contributor
Central Scrutinizer
Joined
Mar 9, 2016
Messages
13,250
Likes
17,184
Location
Riverview FL
OpenAI just published some more analysis of their GPT model behaviours for anyone interested.

From the link:

"Language models have become more capable and more widely deployed, but we do not understand how they work."

How can we not know how they work? (The programmers, not my somewhat inept neuron?)
 

Axo1989

Major Contributor
Joined
Jan 9, 2022
Messages
2,894
Likes
2,949
Location
Sydney
From the link:

"Language models have become more capable and more widely deployed, but we do not understand how they work."

How can we not know how they work? (The programmers, not my somewhat inept neuron?)

That, or they can't write clearly. :)

I had the same reaction (among others), but it looks like they are talking about tracing the precise mechanistic steps from what they (amusingly) call a neuron in their model to the selection of a specific token (word, roughly) in the output text.
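A toy illustration of that kind of probing, with everything made up: the 'neuron' below is a hypothetical stand-in for a hidden unit, not anything from a real GPT. The idea is just that you treat the unit as a black box, feed it text, and see which words its activation tracks.

```python
# Toy "neuron probing": feed text to a black-box unit and check which
# tokens its activation correlates with. Everything here is invented.

def neuron_activation(tokens):
    # Hypothetical hidden unit that (unknown to the prober) fires on
    # audio-related words.
    audio_words = {"speaker", "tweeter", "woofer", "amplifier"}
    return sum(1.0 for t in tokens if t in audio_words)

corpus = [
    "the speaker has a metal tweeter".split(),
    "congress debated the budget".split(),
    "a woofer and an amplifier on the bench".split(),
    "rain is forecast for tuesday".split(),
]

# Probe: compare average activation on sentences with vs without a word.
for word in sorted({"speaker", "budget", "woofer", "rain"}):
    with_word = [neuron_activation(s) for s in corpus if word in s]
    without = [neuron_activation(s) for s in corpus if word not in s]
    avg_with = sum(with_word) / max(len(with_word), 1)
    avg_without = sum(without) / max(len(without), 1)
    print(f"{word}: avg activation with={avg_with:.2f}, without={avg_without:.2f}")
```

Scaling this sort of comparison to billions of units is where the computational (and tedium) cost comes from.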
 

sam_adams

Major Contributor
Joined
Dec 24, 2019
Messages
1,000
Likes
2,435
Our nightmare has arrived! Ha ha!

Not until someone connects it to something that it controls but doesn't understand the consequences of its actions when it makes a mistake. Yes, ChatGPT does make mistakes. It 'hallucinates' facts because it is not a true intelligence. It takes your input and repeatedly predicts probable next tokens, one after another, until it reaches a stopping point, and then barfs out an answer.
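A minimal sketch of that token-by-token loop, with a tiny hand-written probability table standing in for the model. (Real models condition on the whole context over a vocabulary of tens of thousands of tokens and stop on an end-of-sequence token or a length limit; nothing here is an actual GPT mechanism.)

```python
import random

# Toy next-token "model": maps only the last token to a probability table.
MODEL = {
    "<start>":  {"the": 0.6, "a": 0.4},
    "the":      {"speaker": 0.5, "amp": 0.3, "<end>": 0.2},
    "a":        {"speaker": 0.4, "tweeter": 0.4, "<end>": 0.2},
    "speaker":  {"measures": 0.5, "<end>": 0.5},
    "amp":      {"<end>": 1.0},
    "tweeter":  {"<end>": 1.0},
    "measures": {"<end>": 1.0},
}

def generate(max_tokens=10, seed=0):
    """Sample tokens one at a time until an end token or length limit."""
    rng = random.Random(seed)
    token, out = "<start>", []
    for _ in range(max_tokens):
        dist = MODEL[token]
        token = rng.choices(list(dist), weights=list(dist.values()))[0]
        if token == "<end>":
            break
        out.append(token)
    return " ".join(out)

print(generate())
```

The "hallucination" problem falls out of this directly: the loop always produces *something* plausible-looking, whether or not it is true.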

Applying what is essentially a 'novelty' to scientific research, or to mathematics or code writing, when you have little or no knowledge of the subject matter, will quickly lead one to erroneous conclusions. ChatGPT has already been tasked with programming, and roughly 50% of the code it wrote was insecure. Trusting this 'novelty' to generate something when one has no knowledge of how it is supposed to work would be a bad idea, because if that code were used for something that needed to be threat-resistant or fail-safe, it could turn out to be a bad day indeed.
 

DualTriode

Addicted to Fun and Learning
Joined
Oct 24, 2019
Messages
903
Likes
594
Some day your car will be smart enough to drive you home, even if your drunk self nods off on the way there.

Is that AI or what?
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,645
Likes
240,771
Location
Seattle Area
From the link:

"Language models have become more capable and more widely deployed, but we do not understand how they work."

How can we not know how they work? (The programmers, not my somewhat inept neuron?)
Because it is not procedural but data-driven code. The actual code is rather small and has nothing interesting in it. It is the dataset that is massive (think billions of parameters). The data is filled in during training, in a similar manner to neural networks (with optimizations to reduce size). Imagine if I give you a 1 million by 1 million matrix full of numbers. How exactly do you figure out what those numbers mean? You would know how you filled them (statistically), but you don't know what they do when lookups later occur in a different manner. You would have to probe the model to learn what the dataset really represents.
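A rough sketch of this point: the "program" below is a handful of generic lines, and all of the behaviour lives in the numbers, which tell an inspector nothing by themselves. The weights here are random stand-ins for trained parameters, not anything from a real model.

```python
import random

random.seed(1)

# The procedural code is tiny and generic: multiply by weights, apply a
# nonlinearity, repeat. Nothing in it says what the network "knows".
def forward(x, layers):
    for w in layers:
        x = [max(0.0, sum(xi * wij for xi, wij in zip(x, row)))
             for row in w]
    return x

# Stand-in for the trained dataset: 3 layers of 4x4 numbers. To anyone
# reading them, they are just numbers; no row or column is labelled with
# a concept. Real models have billions of these.
layers = [[[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]
          for _ in range(3)]

print(forward([1.0, 0.0, 0.5, -0.2], layers))
```

Scale those 48 opaque numbers up to billions and the "we don't understand how it works" statement becomes less surprising: the only way to learn what they represent is to probe the model from the outside.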
 
OP
Jeromeof

Addicted to Fun and Learning
Forum Donor
Joined
Jul 25, 2021
Messages
623
Likes
995
Location
Ireland
So, I thought I would give a small update on ChatGPT plugins and using them for some analysis.

They are still in a very early state, but I believe more people will get access soon. I still haven't gotten access to the Code Interpreter plugin (which is the one I would be most excited about from a pure data science and analytics perspective).

The browser plugin can do a reasonably nice job of using real data from this website:
Screenshot 2023-05-19 at 13.41.01.png


This is effectively just summarising 2 separate reviews - but once this is loaded into GPT4's 'short term' memory, you can ask questions against the data:

Screenshot 2023-05-19 at 13.46.11.png


I have used the browser plugin to do similar comparisons between speakers via www.spinorama.org - but most interesting IMO, @pierre has already done a nice job of enabling a ChatGPT plugin for the spinorama data. The browser plugin is very early and buggy at the moment, though, and I still don't have access to the data science “code interpreter” plugin.
 

Gruesome

Active Member
Forum Donor
Joined
Apr 21, 2023
Messages
177
Likes
181
Location
California
Maybe ChatGPT can not only write Amir's paper, but also sweet-talk the JAES editors into waiving the over-limit page fee?
 
OP
Jeromeof

Addicted to Fun and Learning
Forum Donor
Joined
Jul 25, 2021
Messages
623
Likes
995
Location
Ireland
To be clear, IMO there are roughly 3 different approaches to using these tools, in increasing order of usefulness:
1. Generate text - least useful IMO, as it can / will definitely make stuff up - e.g. I created a post with a Fake Review here:

2. Use them for summarisation and 'fine tuning' - this is what the browser plugin I used above can do - it does not make stuff up, but temporarily merges the content it extracted from the webpages / website and generates new text using that content as the main subject - useful IMO, as it really is using up-to-date data.

3. Use them to generate "code" (effectively small ad hoc applications), and these small applications actually do data science work, as if you had access to an enthusiastic junior data scientist whose "work" needs to be checked for errors / issues, but when checked / fixed can be used to generate good, reliable results and save a considerable amount of time.
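For a feel of what such a small ad-hoc "application" might look like, here is a sketch in the spirit of approach 3. The measurement data and the flatness statistic are entirely hypothetical, just the sort of throwaway script a junior data scientist might produce and that you would then sanity-check.

```python
import statistics

# Hypothetical on-axis frequency response: (frequency Hz, level dB) pairs.
response = [
    (100, 84.1), (200, 85.0), (400, 85.6), (800, 85.2),
    (1600, 84.7), (3200, 85.9), (6400, 84.3), (12800, 83.8),
]

levels = [db for _, db in response]
mean_level = statistics.mean(levels)
flatness = statistics.pstdev(levels)          # lower = flatter response
worst = max(response, key=lambda p: abs(p[1] - mean_level))

print(f"mean level: {mean_level:.2f} dB")
print(f"deviation from flat (pstdev): {flatness:.2f} dB")
print(f"largest deviation at {worst[0]} Hz")
```

Trivial once written, but exactly the kind of glue code that is tedious to write by hand for every comparison and cheap to have generated, checked, and reused.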
 

onion

Senior Member
Joined
Mar 5, 2019
Messages
343
Likes
383
Yeah, I think the fixation by some on 1 (which does indeed result in probabilistic analysis and hallucinations) is blinding them to the potential of 2 & 3.
 

YSC

Major Contributor
Joined
Dec 31, 2019
Messages
3,205
Likes
2,605
If it behaves like it does when you ask it anything about humans, I bet it will just praise everything and find things in the data to say it's phenomenal.
 

Gruesome

Active Member
Forum Donor
Joined
Apr 21, 2023
Messages
177
Likes
181
Location
California
From my one voluntary interaction with ChatGPT, I was impressed by how its programmers/data feeders/caretakers had integrated hard data into its repertoire (I asked it about yield strains for different grades of steel bolts), but also by how easily it confused itself over the meaning it had assigned to numbers/variables. Like, in one moment it would pick an arbitrary number as an example of yield stress, to illustrate the concept, and two sentences later it was mistaking that number for the actual yield stress in a computation. Impressive, again, was that it correctly parsed my comment pointing out this mistake, corrected its calculations, and spat out a corrected table of yield strains.

But unless a particular instance of this program has been configured to put a very high emphasis on not making stuff up, and a much lower emphasis on spitting results out to keep the 'conversation' going, I don't see how you can use a tool like this without cross checking every bit of output. From my interaction with a particular instance of one of these programs, I would fully expect it to make up correlations between say speaker sensitivity and predicted ranking if it 'got the impression' that was something you might like to hear.
 
OP
Jeromeof

Addicted to Fun and Learning
Forum Donor
Joined
Jul 25, 2021
Messages
623
Likes
995
Location
Ireland
But unless a particular instance of this program has been configured to put a very high emphasis on not making stuff up, and a much lower emphasis on spitting results out to keep the 'conversation' going, I don't see how you can use a tool like this without cross checking every bit of output. From my interaction with a particular instance of one of these programs, I would fully expect it to make up correlations between say speaker sensitivity and predicted ranking if it 'got the impression' that was something you might like to hear.
That is exactly what a plugin does: for a given user's 'session' it loads real data into short-term memory (so-called 'fine tuning') - the ChatGPT session is then much more focused on using that real data rather than the 'pre-trained' data when generating text, so it will give proper results.

But for the real data science, OpenAI has a special plugin which is not about ChatGPT understanding the 'raw' data; it's about ChatGPT understanding how to use standard analytical data science tools, so that the user can interact with those tools in English and the tools then manipulate the data based on what you ask. It is probably hard to visualise, and hopefully OpenAI will make these plugins fully available so people can play with them.
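A rough sketch of what the browsing-style mechanism amounts to behind the scenes: fetched page text is pasted into the prompt so the model answers from that data instead of its pre-trained memory. The function, URLs, and data below are made up for illustration and show no real plugin API.

```python
# Hypothetical sketch of how a browsing plugin grounds the model:
# real page text goes into the prompt ahead of the user's question.

def build_prompt(fetched_pages, question):
    """Assemble fetched (url, text) pairs and a question into one prompt."""
    context = "\n\n".join(
        f"[source: {url}]\n{text}" for url, text in fetched_pages
    )
    return (
        "Answer using ONLY the sources below. "
        "If the answer is not in them, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

pages = [
    ("https://example.com/review-a", "Speaker A preference score: 5.9"),
    ("https://example.com/review-b", "Speaker B preference score: 6.4"),
]
prompt = build_prompt(pages, "Which speaker scored higher?")
print(prompt)
```

The "code interpreter" idea goes one step further: instead of pasting data into the prompt, the model writes and runs small analysis scripts against it, which is why it is the more interesting plugin for data-science work.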
 