
Is "generative AI" incompatible with AF rules?

Hadron

Spacecorp test pilot
Aug 9, 2010
All of these "AI" systems that are in the news now, whether they generate speech or visual images or whatever, have been trained by web-scraping, and we can be pretty certain that this has been done without regard to consent or copyright (e.g. some of the image generators can be prompted to reproduce copyrighted images in some detail without needing detailed descriptions, in some cases even including the copyright holder's watermarks, which is something of a giveaway). Lawsuits have started to emerge, and there would probably be a lot more but for the secrecy about what's in the training samples.

In light of this I was wondering: the forum rules here do not allow linking or support for pirated material, and it's clear that many (I suspect all) of these systems have been based on some degree of data piracy or non-consensual appropriation. So strictly speaking, might linking to them or posting their output be a contravention of the forum rules? ;)

I put a wink there because in the context of this forum I don't think this is a major issue. But in a broader context I'm less sanguine.
 
I think at this point we have to use our judgement, though I'd like to have @Rob check in on this as he mentioned being heavily involved in some AI stuff (and it is his site).

I'm not a lawyer, but this is my opinion on the matter and how I'm thinking about it.

If someone is presenting what is clearly someone else's work as their own, we should report it. If someone posted a video of an AI recreation of the Celtics' Game 7, that would probably be an issue. If someone asked AI to describe the game and posted it as something like "I asked Bard to summarize the game and this is what I got", then I'd lean towards that being OK.

As with all things, if you have concerns, report it. We'll discuss it and try to make a reasonable resolution. Unless @Rob checks in with some more sound legal advice. :)
 
Upvote 0
I wasn't really expecting AF to ban any content produced by these things. It was more that I wanted to flag up that they are actually built on digital appropriation on a massive scale, in case people hadn't really thought about what lies behind them. Suggesting that anything at all produced by them might therefore be regarded as an infringement of AF rules was more a whimsical way of flagging that, out in the real world, there are concerns about the ethics and legality of how these things are trained. ;)
 
Upvote 0
All of these "AI" systems that are in the news now, whether they generate speech or visual images or whatever, have been trained by web-scraping, and we can be pretty certain that this has been done without regard to consent or copyright
Don't human artists and writers do that too? All art, all writing, is inspired by other art and writing. Even the words I'm typing now, I learned by seeing others use words when I was a toddler.
 
Upvote 0
The question is where is the line? In the case of image tools it's easy: when you can get them to create an image and it includes the watermark from Getty Images it's clear what they have done.

The issue is data scraping. The fact that something can be found on the Internet does not mean that you are free to use it yourself without attribution and for any purpose you like. This is actually a topic of real debate, which the companies of course want to brush over. It makes no difference to me if some database uses my post here as part of a training sample. If I'm an author and my works are used that way it might. This is why I mentioned copyright, because the issue is permission to reuse, and the companies have made no effort to respect that.
 
Upvote 0
The lines are getting blurred more and more each day so there's bound to be a lot of gray area. In terms of the rules here, I think addressing each situation on a case-by-case basis is the only way to go.

That being said, personally, I would encourage our members to use AI (responsibly of course)!

It's here to stay. In a way it's like... embrace it or get left behind. And it's fun!

Intellectual property law is so far behind that it's impossible to know what's really "fair use" and what's not these days. Social media is filled with stolen content. And even then, which court and which judge get to decide?
 
Upvote 0
I refuse to embrace it, and I don't think it will be a way of life for long. I think it's a dangerous fad. As the lines continue to blur between reality and fiction, more & more people will seek out true human connection and creativity that didn't have the assistance of some AI system. Whether that's opening up old books and reading them, or seeking out writers who actually write their material, humans will want sincerity and authenticity.

As a writer and creator, I simply refuse to use AI to create anything that will have my name attached to it. I've spent a lifetime sharpening my skills as a writer: I'm not about to outsource all that to some computer program that's scraping a million web pages a second to write something for me. It's inevitable that MY creative content will be used by AI without my knowledge or consent - something of which I vehemently disapprove. I foresee a backlash coming against AI within a few years.
 
  • Like
Reactions: PitCarver
Upvote 0
It's amusing to watch companies jump on the bandwagon with no knowledge of what they are doing. In the UK the delivery company DPD recently added generative AI to the chatbot it uses in place of human beings to answer queries. The other day someone who was fed up with the fact that it couldn't answer anything and he couldn't speak to anyone decided to have a little play with it - and got it to compose a couple of poems about how useless it is, to diss the company for how useless they are, and overrode its "polite and professional" personality just by telling it to start swearing.
Of course it never occurred to the bean-counting numpties, whose only interest was that it was cheaper than employing people (whether it could do the job or not), that if you train a "stochastic parrot" on a skim of the internet, its training sample will include plenty of examples of people dissing companies - including your company - which it can reply with given the right prompt.
 
Upvote 0
What does this have to do with the line between reality and fiction?
I think at this point any AI I'm aware of is learning from the words of people, or our stories, true or not, with all of the biases of the person writing. The AI is further filling out its worldview from reactions to the responses to the content it creates. AI isn't living the experience, learning what makes it happy or sad, failing and feeling failure, or what many would consider reality. I think that may be where The_Chief was coming from.
 
Upvote 0
I think at this point any AI I'm aware of is learning from the words of people, or our stories, true or not, with all of the biases of the person writing.
That could also describe humans.
AI isn't living the experience, learning what makes it happy

That's an advantage of AIs when it comes to discerning truth from falsehood! Humans learn what makes them happy, and what makes them happy is often a lie. AIs don't have happiness.
 
Upvote 0
I think at this point any AI I'm aware of is learning from the words of people, or our stories, true or not, with all of the biases of the person writing. The AI is further filling out its worldview from reactions to the responses to the content it creates. AI isn't living the experience, learning what makes it happy or sad, failing and feeling failure, or what many would consider reality. I think that may be where The_Chief was coming from.
Yes, my concern with "blurring the lines between truth and fiction" is the ability of these tools to auto-generate deceptive content, combined with the damaging effects of social media algorithms which already drive division and undermine truth in their quest for "engagement".
That's an advantage of AIs when it comes to discerning truth from falsehood! Humans learn what makes them happy, and what makes them happy is often a lie. AIs don't have happiness.
As I say, "AI" means many things. The AI that automates protein folding simulations and so massively accelerates drug design is not the same AI that writes a set of fake legal precedents for a court case or will generate a pastiche of Macbeth in the style of Donald Trump if you ask it to. But what all of these have in common is that they are applications of syntax rather than semantics, i.e. the machine learning algorithm applies rules without any knowledge of the meaning of what it is doing. It's indifferent to the concepts of "truth" or "falsehood" because it doesn't have any concepts whatsoever: you give it some input and it follows its set of rules until it reaches its stopping condition and then returns its output.

So an "AI" can be good at recognising "truth" if properly trained, and if the data it is assessing lies within the scope of its training. But when faced with something outside of that it can respond inappropriately, e.g. the Uber self-driving car which drove over a woman pushing a bicycle because it hadn't been trained not to. That is a mistake no human driver would make because a human understands that they are driving a car, there's something in front of them and there will be consequences if they just keep driving. The machine understands none of that: it has a stream of data, a set of responses, and a set of rules, and that's it: it does not know what either the data or the responses mean.
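To make the "outside the scope of its training" point concrete, here's a minimal sketch with made-up numbers (purely an illustration of forced choice, not anything like the actual Uber system): a toy classifier that only knows two labels still has to split 100% of its confidence between them, even for an input that matches neither.

```python
import numpy as np

LABELS = ["pedestrian", "cyclist"]   # the only categories this toy model knows

def softmax(logits):
    """Turn raw scores into probabilities that always sum to 1."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical scores for an input the model was never trained on
# (say, a person pushing a bicycle): neither label really fits,
# but the model must still commit to a confident-looking answer.
logits = np.array([0.3, 0.2])
print(dict(zip(LABELS, softmax(logits).round(2))))
# e.g. {'pedestrian': 0.52, 'cyclist': 0.48} - a forced choice, not understanding
```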

But the sort of "AI" that the Chief was referring to, the "generative AI", is utterly useless for discerning truth from falsehood, because that is not its function. It generates text (or images, but in this case text) in response to prompts, based on probability tables derived from large samples of human-generated text and a complex set of rules. So it will tell you that the Earth is flat as easily as it will tell you that the Earth is round, depending on the prompt it is given, because it doesn't understand that its output has any meaning. That is why they so easily "invent" or "hallucinate" falsehoods even when asked to give factual answers. John Searle's analogy of a "Chinese room" is a good description of what it does. The only real difference is that the man in Searle's room, following a set of instructions to select the right set of (to him) meaningless symbols to return in response to the symbols passed to him, would surely speculate about what the symbols meant, whereas the machine cannot.
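If it helps to see the "probability tables plus rules, no meaning" idea concretely, here's a deliberately tiny sketch: a bigram generator trained on three made-up sentences. It's nothing like a real LLM in scale, but it has the same basic loop of sampling a continuation until a stopping condition is hit, and nothing in it ever consults any notion of truth.

```python
import random
from collections import defaultdict

# Made-up "training corpus"; real systems scrape billions of words.
corpus = [
    "the earth is round",
    "the earth is flat",
    "the moon is round",
]

# Build a crude probability table: word -> list of words seen following it.
table = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)

def generate(prompt_word, max_len=10):
    """Sample continuations until no follower exists or max_len is reached.

    The loop only asks "what tended to follow this word?", never
    "is this true?" - so "the earth is flat" and "the earth is round"
    are equally valid outputs depending on the random draw.
    """
    out = [prompt_word]
    while len(out) < max_len and table[out[-1]]:
        out.append(random.choice(table[out[-1]]))
    return " ".join(out)

print(generate("the"))
```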

But if the assertion is that an ML algorithm doesn't have biases, that's clearly not true. There are biases from the training process: the data used and the training criteria applied. And as has been shown time and time again, where the machine can "learn" from interaction it can acquire biases (Microsoft's infamous "Tay" chatbot, for example: repeating racist tropes didn't make it happy, but a few hours' interaction with racists or pranksters was enough to change "its truth"). And yes, when a big enough problem is uncovered the company behind the "AI" will introduce protections, i.e. changing the rules to reduce the probability of unwanted outputs, but that also shows how easily these things can be skewed to produce the "truth" that their creator wants you to hear.
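And those after-the-fact "protections" are often little more than extra rules wrapped around the same statistical machinery. A crude, purely hypothetical sketch (real systems use trained classifiers and tuned models, not word lists): re-sample until the output clears a block list, otherwise refuse. The underlying probabilities are untouched; only what the user gets to see changes.

```python
BLOCKED = {"badword", "slur"}   # hypothetical list; illustration only

def guarded_reply(generate, prompt, retries=3):
    """Keep re-sampling until the output avoids blocked words, else refuse."""
    for _ in range(retries):
        reply = generate(prompt)
        if not any(word in BLOCKED for word in reply.lower().split()):
            return reply
    return "Sorry, I can't help with that."

# Usage with any text generator, e.g. the toy bigram generate() above:
# print(guarded_reply(generate, "the"))
```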
 
Last edited:
Upvote 0
