
Using AI for Review Thread Summaries?

That would be extremely useful. I can get a developer to do the plug-in for us if the rest is available. We have so much knowledge here that is hard to sift through. What does it take to develop this?
Bingo !!! This would be wonderful.
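On the "what does it take to develop this" question, here is a minimal sketch (assuming nothing about the forum's actual plug-in API) of one building block such a tool would need: batching thread posts into chunks that fit a model's context budget, so long threads can be summarized map-reduce style (summarize each chunk, then summarize the summaries). The function name and character budget are illustrative assumptions.

```python
def chunk_posts(posts, max_chars=4000):
    """Group forum posts into chunks that fit a model's context budget.

    Posts are kept whole; a single post longer than max_chars still
    gets its own chunk. max_chars is an illustrative stand-in for a
    real token budget.
    """
    chunks, current, size = [], [], 0
    for post in posts:
        # Start a new chunk if adding this post would exceed the budget.
        if current and size + len(post) > max_chars:
            chunks.append(current)
            current, size = [], 0
        current.append(post)
        size += len(post)
    if current:
        chunks.append(current)
    return chunks
```

Each chunk would then be sent to whatever model the plug-in uses, with the per-chunk summaries combined in a final pass.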
 
We have a variety of AI tools at our disposal at my work (we have also been developing and using AI and ML in our products for almost two decades), and if used appropriately I find it useful for analysis functions similar to what the Amazon feature is doing. That said, the output is still reviewed, and it's only used in certain scenarios; it wouldn't [yet] replace the need for human analysis, and we communicate why that's important. But yes, it can be quite useful and does a decent job.
 
...summarize the activity of the PierreV pseudoid user on the audio science review forum.
On your advice and a bit of curiosity, I asked ChatGPT the same question about my user name.
:mad: It kept referring to me as "They".:mad:
It either thinks I am a member of the LGBTQ+ group or.... somehow I have to 'train' it to think that I am a female!:facepalm:
 
2) generative AI often produces content similar to existing content, "plagiarism"
When citing research myself, I always worried about where the line was between referencing and making my writing “unoriginal”.

Take a room full of students and one or more AI assistants of your choice. Provide them all the same body of text and ask them to summarize or write a basic five paragraph essay.

If you find commonalities in the output between them, would that mean that any of them are plagiarizing?

All are generated from their ability to understand the input given and to analyze it against their base of knowledge or data.

You’d hope they are more alike than not, wouldn’t you?

My point is that the use of AI assistants is no more inherently plagiarism than a programming or graphing calculator could be. It is HOW it is used and WHEN it is not credited that makes it plagiarism.
 
You’d hope they are more alike than not, wouldn’t you?
That would be a WAG, dependent on so many unavailable and unknown factors.
Asking for a summary is not like pulling out an AudioPrecision that summarizes test results.
Have there been any conclusive studies to determine whether - indeed - the outputs are 'more alike' than not?
 
On your advice and a bit of curiosity, I asked ChatGPT the same question about my user name.
:mad: It kept referring to me as "They".:mad:
It either thinks I am a member of the LGBTQ+ group or.... somehow I have to 'train' it to think that I am a female!:facepalm:
Look who’s talking… “it” :facepalm: You didn’t even ask, did you? :p
 
On your advice and a bit of curiosity, I asked ChatGPT the same question about my user name.
:mad: It kept referring to me as "They".:mad:
It either thinks I am a member of the LGBTQ+ group or.... somehow I have to 'train' it to think that I am a female!:facepalm:
Non-gender-specific words (such as "they") are routinely used when gender is unknown, not only in connection with non-binary gender ID.

E.g.:

Someone has left their coat behind.
Oh, I wonder where they are now.

User 564 has left a long ranty post.
Looks like they can't spell either.
 
That would be a WAG, dependent on so many unavailable and unknown factors.
Asking for a summary is not like pulling out an AudioPrecision that summarizes test results.
Have there been any conclusive studies to determine whether - indeed - the outputs are 'more alike' than not?
Fair enough, but I was only addressing the claim of AI inherently being plagiaristic.
 
It's in the TOS:


Answer - ASR owns the rights, but you also retain your rights.

AFAICT ASR would not be required to give you royalties for any further use of your posts.

e: Saw that someone else answered already, my bad.

Anyway, I think it's an alright idea to summarize long threads using AI.

However, I would still urge caution. I have been using an AI tool to summarize meetings at work. It works pretty well, but I still review all the output manually. In one of roughly five meetings, it reported the opposite of what we actually agreed on. The rest of the time it's spot-on, but that's how it gets you: a false sense of confidence.

The problem with LLM-based summaries is they don't have any way to self-correct on factual information. And when they report non-factual information, there's no obvious sign that it's wrong unless you happen to know it's wrong. It cheerfully and confidently outputs fact and fiction alike.
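One cheap, purely mechanical guard against this (a sketch of my own, not a feature of any tool mentioned here): check whether the verbatim quotes in a summary actually occur in the source text. It only catches fabricated direct quotes, not paraphrased errors, so it narrows the review burden rather than removing it.

```python
import re

def unsupported_quotes(summary: str, source: str) -> list[str]:
    """Return double-quoted snippets from `summary` that never appear in `source`.

    A crude grounding check for LLM summaries: anything it flags is
    definitely not a verbatim quote from the source; anything it
    passes still needs a human eye.
    """
    quotes = re.findall(r'"([^"]+)"', summary)
    return [q for q in quotes if q not in source]
```

For example, a summary quoting "approved the budget" would be flagged if those words appear nowhere in the meeting transcript.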
It cheerfully and confidently outputs fact and fiction alike.

That’s so incredibly human.;)
 
...somehow I have to 'train' it to think that I am a female!:facepalm:
What I was trying to get at was that it needs further training; at a minimum, it could have determined my sex/gender, if properly paper-trained:
[attached screenshot of the ChatGPT exchange]

It may not even know the nuances between gender and sex (something I've been questioning lately).
Non gender specific words (such as they) are routinely used when gender is unknown. Not only to do with non-binary gender ID.
Therein lies a slight problem with English being the dominant language that LLMs are trained on.
Pronouns are a can of worms when it comes to inter-language translation. Some languages are gender-specific (he/she), some are grammatically gendered (le/la), others are not. Things get even more complex where the plural pronouns - mostly - dispense with gender specificity.
English 'he' or 'she' would translate to a gender-less 'it' equivalent elsewhere.
English has no plural for 'you'! Uze 'awl is as close as we get, in the southern states.:)
 
Amazon AI summaries are inevitably "Reviewers were generally pleased with the product, which is why the listing hasn't been deleted and recreated, but opinions were mixed on <every possible objective factor>"
 