Gentlemen!
Ok, I edited the post.
I find it very relevant. How could it not be? If you want to use a different description, you'll still be talking about the same issue. When certain things are deemed off limits or unacceptable, it will affect the results from an AI, no different from the way bias affects the answers people give.
Yes, really. At least among the people I know, it seems like the best and most succinct description. I try very hard to give the benefit of the doubt. I'll drop this discussion as I don't want to make it political or problematic.
Some Twitter users began tweeting politically incorrect phrases, teaching it inflammatory messages revolving around common themes on the internet, such as "redpilling" and "Gamergate". As a result, the robot began releasing racist and sexually-charged messages in response to other Twitter users. Artificial intelligence researcher Roman Yampolskiy commented that Tay's misbehavior was understandable because it was mimicking the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior. He compared the issue to IBM's Watson, which began to use profanity after reading entries from the website Urban Dictionary. Many of Tay's inflammatory tweets were a simple exploitation of Tay's "repeat after me" capability. It is not publicly known whether this capability was a built-in feature, or whether it was a learned response or was otherwise an example of complex behavior. However, not all of the inflammatory responses involved the "repeat after me" capability; for example, Tay responded to a question on "Did the Holocaust happen?" with "It was made up".
Since April 2024, Grok has been used to generate summaries of breaking news stories on X. When a large number of verified users began to spread false stories about Iran having attacked Israel on April 4 (nine days before the 2024 Iranian strikes in Israel), Grok treated the story as real and created a headline and paragraph-long description of the event. Days later, it misunderstood many users joking about the solar eclipse and produced the summarized headline "Sun's Odd Behavior: Experts Baffled".
This is precisely what makes such approaches less objective and therefore less useful in an objective sense. One could argue that AIs catering to societal trends (prioritizing something other than pure facts because those facts are considered unfashionable, might offend someone, or are deemed risky) diminish their own value.

Others like Microsoft (or Alphabet, or Apple, and so on) aren't going to do that. They'll have to deal with societal norms in order for their 'AI' products and services to be acceptable and usable for broad demographics in the US and globally. To paraphrase your post, certain things are off limits or unacceptable. Which means design interventions and value judgements. What else can they do?
I appreciate that. The issue is that Suleyman still speaks of AI in the generic way that OpenAI etc. deliver it (for the most part). I gave the example of him arguing with an AI chatbot about what movie to watch. I believe the goals of the two companies are completely different.
MS has to produce and inject AI carefully into its offerings so that it augments the products rather than competing with them.
May I suggest you're thinking of it as if it can think... it can't. Providing related information and topics to explore is pretty standard and doesn't mean it's pushing any kind of specific agenda; it may just seem that way to you. Climate change is one of those things that's backed up by large amounts of factual research data.
This is precisely what makes such approaches less objective and therefore less useful in an objective sense. One could argue that AIs catering to societal trends (prioritizing something other than pure facts because those facts are considered unfashionable, might offend someone, or are deemed risky) diminish their own value.
An AI influenced by trendy or political motives is likely not what humanity truly needs or wants.
Not to dwell on that subject overly, but you are correct, of course. Unless one is focusing on a narrow technical question or function, generating text about renewable energy while avoiding discussion of climate change (a significant factor in renewable energy economics, rationales and strategies) would require some odd constraints and/or distortions, which would be political, not scientific. Arguing for 'AI' without guardrails, then insisting on avoiding certain subject matter, is inconsistent nonsense.
We don't disagree. But don't conflate societal norms with trendy motives; that's a straw man and an oversimplification.
Selective facts are certainly problematic, but 'pure facts' are also insufficient. For example, most societies posit as normative that slavery or genocide are unacceptable. We won't go into detail on this forum, but suffice it to say that one social group can gain competitive advantage by eliminating another, or that fully functional economies can be built on slave labour. Acceptability or otherwise is primarily based on morals and ethics, rather than facts. A probabilistic text generator can refer to those things in context, obviously, but Microsoft and others won't let their 'AI' products advocate them, and the necessary applied constraints are normative, not factual. It's impossible to avoid value judgements when developing and implementing functional 'AI' products in our societies.
So we have two broad problem areas. Firstly, there is no comprehensive training corpus comprised of 'pure facts' to start with. Secondly, some normative constraints will be applied necessarily, requiring value judgements, some of which will be contested.
That's impossible from an engineering standpoint.

When AI is used as a tool for search and information, it should be free of any restrictions or biases related to ideologies.
On one hand, this is a chilling thought...

It's not a tool for facts. It's a tool for well-formed sentences trained on what's on the Internet...
For everybody who considers AI output potentially trustworthy, have a think about what happens for non-English use cases...
Basically, to save you the research effort: you get different answers. If your native or sole language is not heavily used on the Internet, there is less material for competitive training of transformers etc., and fewer examples to build the probability models from. The result is significantly different answers from the English AI tools.
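If anyone wants to check this for themselves rather than take my word for it, something like the sketch below is enough to compare replies side by side. It's only a rough illustration: it assumes the OpenAI Python client (openai 1.x) with an API key in the environment, and the model name, example question and translations are placeholders, not anything the products discussed here actually use.

# Rough sketch: ask the same question in a few languages and compare replies.
# Assumes the OpenAI Python client (openai >= 1.0) and OPENAI_API_KEY set in
# the environment; model name, question and translations are illustrative only.
from openai import OpenAI

client = OpenAI()

QUESTIONS = {
    "English": "What are the main causes of inflation?",
    "Spanish": "¿Cuáles son las principales causas de la inflación?",
    "Serbian": "Koji su glavni uzroci inflacije?",
}

for language, question in QUESTIONS.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": question}],
        temperature=0,        # reduce run-to-run sampling noise
    )
    print(f"--- {language} ---")
    print(response.choices[0].message.content)

Temperature 0 just keeps the sampling noise down so that whatever differences remain between languages are easier to see.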
It's the truth. A friend builds analytical tools wrapped around RAG and AI APIs. He has real challenges with non-English languages. Spanish, French, Portuguese etc. are not so bad, but smaller languages give very different results.

Huh, is this right? I would have assumed it searches for answers across all languages, analyzes the consensus, and then presents the result in the language you used to ask the question.
It does NOT search. It absorbs and then trains. It's not a search engine.
Wow, that must require an enormous amount of data storage!
I guess this operates quite differently from what I expected. It must be incredibly resource-intensive!

It cannot be multilingual when training. It can only train in one language. Otherwise, how could it compare one sentence in English with another in Serbian? Imagine you are teaching a six-year-old and you answered each of their questions in a different language; how would they learn?
A different reply is fine, but this is far more than that. It is as if CoPilot is not even trying to answer the question, or is deliberately trying not to.
Both CoPilot (Bing) and ChatGPT do use search engines as a source to supplement their answers when the answer to a query isn't known or when more up-to-date information is needed, current events for example.
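For what it's worth, that "use search engines" part is usually retrieval at answer time rather than further training: a wrapper around the model fetches fresh snippets and pastes them into the prompt, which is roughly what the RAG tools mentioned earlier do. Here is a minimal sketch of the pattern; web_search() is a hypothetical placeholder, and the OpenAI client call, model name and prompt wording are assumptions for illustration, not what Microsoft or OpenAI actually run behind their products.

# Sketch of the retrieve-then-answer (RAG) pattern discussed above.
# The model itself does not crawl the web; the application around it fetches
# snippets and includes them in the prompt as context for one answer.
# web_search() is a hypothetical stand-in for a real search API; the client
# call assumes openai >= 1.0 and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def web_search(query: str) -> list[str]:
    # Hypothetical placeholder: a real implementation would call a search API.
    return [
        "Snippet 1: example passage retrieved for the query.",
        "Snippet 2: another retrieved passage with more recent information.",
    ]

def answer_with_retrieval(question: str) -> str:
    snippets = web_search(question)
    context = "\n".join(snippets)
    prompt = (
        "Answer the question using only the sources below. "
        "If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(answer_with_retrieval("What happened in the news today?"))

The retrieved text only shapes that one response; nothing is fed back into the model's weights, which is why the underlying model can still be weak in smaller languages even when the wrapper can search in them.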