
Master AI (Artificial Intelligence) Discussion/News Thread

"trustworthy lawyers"

Yeah, but where to find them? ;)

[Attached image: lawyers.png]
 
Just for the record, I'll still rank them (lawyers) a lot higher in trustworthiness than politicians. Now that's a low bar for sure, but still :cool:
 
Even if an app runs locally, it could send data to the company, and if it's not open source no one can verify what is being sent to the mothership.

Detecting if some local software is sending data out is trivial.
Decrypting what it sends is often trivial and sometimes harder, but it is always possible on a computer you have local access to (which is why, whether you are a defense contractor or a normal user, you are utterly toast if someone manages to establish a foothold on your machine).
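As a concrete illustration of how visible this is on a desktop OS, here is a minimal Linux-only, IPv4-only sketch (standard library only) that parses the kernel's /proc/net/tcp table to list the remote endpoints of established connections. In practice you would reach for tcpdump, Wireshark, or an application firewall such as OpenSnitch or Little Snitch, but the point stands: nothing the software sends is hidden from the machine's owner.

```python
# Minimal sketch: list remote endpoints of ESTABLISHED TCP connections
# by parsing the kernel's /proc/net/tcp table (Linux, IPv4, stdlib only).
import os
import socket
import struct

def decode_addr(hex_addr: str):
    """Decode a /proc/net/tcp address field such as '0100007F:1F90'."""
    ip_hex, port_hex = hex_addr.split(":")
    # IPv4 addresses are stored as a little-endian 32-bit hex value.
    ip = socket.inet_ntoa(struct.pack("<I", int(ip_hex, 16)))
    return ip, int(port_hex, 16)

def established_remotes(path="/proc/net/tcp"):
    """Return (ip, port) for every connection in state 01 (ESTABLISHED)."""
    remotes = []
    with open(path) as f:
        next(f)  # skip the header row
        for line in f:
            fields = line.split()
            if fields[3] == "01":            # connection state, in hex
                remotes.append(decode_addr(fields[2]))  # remote address
    return remotes

if __name__ == "__main__" and os.path.exists("/proc/net/tcp"):
    for ip, port in established_remotes():
        print(f"{ip}:{port}")
```

This only covers IPv4 TCP (/proc/net/tcp6 and /proc/net/udp have the same layout for the other cases); mapping sockets back to the owning process takes a further walk through /proc/*/fd, which is exactly what tools like `ss -p` do.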
Open source is nice, but it isn't a silver bullet, as people don't typically spend their time reviewing open source line by line. We have seen cases where essential open source projects themselves were compromised. Even something as innocuous as a call to a random number generator for seeding keys may be a liability if the RNG has been biased in subtle ways. The implementation may be compromised, or even the standard on which the implementation is based; look up Dual EC DRBG.
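A toy illustration of why biased key seeding matters (this is not the actual Dual EC DRBG construction, which hides a trapdoor in a chosen elliptic-curve point; it just shows how even a subtle one-bit bias silently halves the key space):

```python
# Toy illustration (NOT the real Dual EC DRBG backdoor): if the RNG used
# for key seeding is sabotaged so the low bit is always 0, every key is
# even, and a brute-force attacker's search space is cut in half.
import random

N_BITS = 16  # tiny key size so the effect is easy to see

def honest_key(rng):
    return rng.getrandbits(N_BITS)

def biased_key(rng):
    # Subtle sabotage: clear the least-significant bit.
    return rng.getrandbits(N_BITS) & ~1

rng = random.Random(42)
biased = [biased_key(rng) for _ in range(1000)]

# Every sabotaged key is even, so at most 2**(N_BITS - 1) of the
# 2**N_BITS possible keys are ever reachable.
assert all(k % 2 == 0 for k in biased)
```

Individually the keys look perfectly random, which is the whole danger: only someone who knows about the bias gains the advantage, and a statistical test on a small sample will not flag it.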

That being said, one of the most worrying things in the current state of affairs is running a local agentic AI that has access to your machine's file system, applications, and processes.
 
Just for the record, I'll still rank them (lawyers) a lot higher in trustworthiness than politicians. Now that's a low bar for sure, but still :cool:
Hmmm. It's not the job of lawyers to be trustworthy. It's the job of lawyers to fulfill the ethical responsibilities of their profession, which means advocating for their clients without telling known lies, and acting honorably as officers of the court. They fairly routinely fail at both of those standards, because both are designed to constrain them against the desires and incentives of their clients and careers.

Telling the truth is a universal ethical directive, present in just about every moral code that has been written down over the course of history. But people do tell lies, and do so routinely, even blithely.

We should not expect AI to be any more moral than we are, whatever codes we claim. But at least humans are capable of empathy and selflessness, which are concepts orthogonal to AI processes. As I said waaaay upstream in this thread, some portray AI as attaining the highest human ethical standard--essentially bettering the best humans--but in truth it will likely be worse than the worst humans, who will use it to further their malicious intentions.

What incentive is there for a politician to be moral and upstanding? When are they ever rewarded for taking a principled stand? But at least there is some accountability mechanism available to the people affected by their decisions.

Rick "who took an oath as demanding as the one taken by politicians, but with much greater accountability" Denney
 
Detecting if some local software is sending data out is trivial.
Decrypting what it sends is often trivial and sometimes harder, but it is always possible on a computer you have local access to (which is why, whether you are a defense contractor or a normal user, you are utterly toast if someone manages to establish a foothold on your machine).
Open source is nice, but it isn't a silver bullet, as people don't typically spend their time reviewing open source line by line. We have seen cases where essential open source projects themselves were compromised. Even something as innocuous as a call to a random number generator for seeding keys may be a liability if the RNG has been biased in subtle ways. The implementation may be compromised, or even the standard on which the implementation is based; look up Dual EC DRBG.

That being said, one of the most worrying things in the current state of affairs is running a local agentic AI that has access to your machine's file system, applications, and processes.
It is very easy to detect what connections your local software is making, and even to block them. I am talking about desktop OSes, not mobile; on mobile we don't stand a chance, as on Android and iOS apps not only phone home but also share your data with each other.

Open source can be and has been compromised, BUT when the code is open source a backdoor has a much better chance of being detected, so sooner or later it will be found by the community. And at least that crowd takes privacy and security more seriously than the average user.

Having said that, privacy and security do not always go together. Out of the box, iPhones and MacBooks are more secure than Linux for the average user, because they are very closed and tight systems; as you say, outsiders can't get hold of your machine, and you are right about that.

Most people don't care about privacy, or they don't even know what encryption is and how it is tied to their privacy. Stuff like what you are describing is one more reason for more people to turn to open source and privacy-focused products.

As for the last part: I personally don't use AI, for a lot of reasons; I haven't installed any software, nor do I use it online. But I am afraid that in the near future it will be inevitable, and we will all have to choose between technology and privacy.
 
And oddly enough a lot of politicians were previously lawyers.
It's not odd at all IMHO, and very helpful for many careers, but discussing it here is not the right thing to do.
 
It is very easy to detect what connections your local software is making, and even to block them. I am talking about desktop OSes, not mobile; on mobile we don't stand a chance, as on Android and iOS apps not only phone home but also share your data with each other.

Open source can be and has been compromised, BUT when the code is open source a backdoor has a much better chance of being detected, so sooner or later it will be found by the community. And at least that crowd takes privacy and security more seriously than the average user.

Having said that, privacy and security do not always go together. Out of the box, iPhones and MacBooks are more secure than Linux for the average user, because they are very closed and tight systems; as you say, outsiders can't get hold of your machine, and you are right about that.

Most people don't care about privacy, or they don't even know what encryption is and how it is tied to their privacy. Stuff like what you are describing is one more reason for more people to turn to open source and privacy-focused products.

As for the last part: I personally don't use AI, for a lot of reasons; I haven't installed any software, nor do I use it online. But I am afraid that in the near future it will be inevitable, and we will all have to choose between technology and privacy.
On iPhone and MacBook you don't need a third party to drain your data .. :cool:
 
On iPhone and MacBook you don't need a third party to drain your data .. :cool:
Since the mysterious demise of TrueCrypt, and the Snowden debacle, I have zero illusions.
As one of my profs used to say (in the 90s...): never put online anything you wouldn't want to become public.
Today it's even "better", since big data is "sniffing" people's devices without them realizing (or caring).

I use Linux for online duties, but even these PCs are free from any important or sensitive data. Just in case.
Just because I'm paranoid doesn't mean they aren't out to get me ;)
 
On iPhone and MacBook you don't need a third party to drain your data .. :cool:
No... and Apple makes sure they are the only ones that have access to it. I am sure they drain your data just fine...

I was referring to security, though. Windows and Android are just terrible out of the box; macOS and iPhones are, in my opinion, the safest choices out of the box when it comes to security.

I ditched Windows a long time ago, and I only use macOS when I have to.
 
What incentive is there for a politician to be moral and upstanding?
I think there should be a change in the law to make misleading the public an offence by members of government/opposition.

I don’t know how democracy got to this point where lies are expected and tolerated in government.

If it were any other profession the individual would lose their job.

In the UK there is a fairly blurry set of 'guidelines' which can be interpreted to suit the party involved.

https://erskinemay.parliament.uk/section/5022/members-deliberately-misleading-the-house/
 
UK might have 'guidelines', but no constitution at all.
 
There must be some behavioural boundaries for the US government/opposition too. I will have a rummage on the internet.
 
UK might have 'guidelines', but no constitution at all.
It's not all in one document, if that's what you mean, but most of the things that usually go in a constitution are present in different places. We haven't had one of the major changes that usually precipitates writing a constitution, except for a bit which we like to pretend didn't happen because Cromwell turned out to be just as bad as a king.
 
I'm messaging folks from Dolby, Apple, and Disney on LinkedIn to see if:

They have any teams working on modeling a listening space's impulse response using deep neural networks: NeRF (Neural Radiance Fields)-like architectures for acoustic modeling of listening spaces, coupled with visual input. How does the impulse response (and consequently the primary and secondary reflections) change when certain things in a room move?

Haven't gotten any response so far...
 
Why though? This seems like a no-brainer for these folks to be working on! Even Mitsubishi has published recent work on a related topic. There has been nothing as powerful as brute-force deep learning for modeling a space or phenomenon. Sure, it will take a few tries, but one can carefully train a large model to capture the phenomenon.

NeRFs can be used to interpolate a space visually, and one can then train a multimodal model (visual and acoustic) to learn to interpolate what happens if some things in the room are moved or removed!
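To make the idea concrete, here is a minimal, untrained sketch (NumPy only; every name and size here is my own illustrative assumption, not anything Dolby, Apple, or Disney have published): a NeRF-style sin/cos positional encoding of the source and listener positions, fed through a tiny MLP that emits impulse-response taps. A real system would be trained on measured room IRs and conditioned on visual features as well.

```python
# Sketch of a NeRF-style "acoustic field": map (source xyz, listener xyz)
# to a short impulse response. Untrained random weights; shapes only.
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(x, n_freqs=4):
    """NeRF-style sin/cos feature lift of each input coordinate."""
    freqs = 2.0 ** np.arange(n_freqs) * np.pi
    enc = [fn(f * x) for f in freqs for fn in (np.sin, np.cos)]
    return np.concatenate([x] + enc)

class TinyAcousticField:
    """Two-layer MLP: encoded positions -> n_taps impulse-response taps."""

    def __init__(self, n_taps=256, hidden=64):
        in_dim = positional_encoding(np.zeros(6)).size  # 6 coords encoded
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, n_taps))
        self.b2 = np.zeros(n_taps)

    def __call__(self, src, lst):
        h = positional_encoding(np.concatenate([src, lst]))
        h = np.maximum(h @ self.W1 + self.b1, 0.0)  # ReLU hidden layer
        return h @ self.W2 + self.b2                # predicted IR taps

field = TinyAcousticField()
ir = field(np.array([0.0, 0.0, 1.0]), np.array([2.0, 1.0, 1.0]))
```

The positional encoding is the standard NeRF trick for letting a small MLP represent high-frequency spatial variation; everything else (layer count, 256 taps, the choice to predict raw taps rather than, say, an energy-decay curve) is arbitrary and would be a design decision in a real model.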
 
Social impact.

“By January 2025, the family says he began discussing methods of suicide with ChatGPT.

Adam also uploaded photographs of himself to ChatGPT showing signs of self harm, the lawsuit says. The programme "recognised a medical emergency but continued to engage anyway," it adds.”


 