Why do you think you're right?

Despite the new EU AI Act, I'm keeping my forecast low, because what the Act demands looks technically impossible.

This law applies only to companies that generate AI texts. There is no technically feasible way for a social media company to connect a user's post back to a text generated by an AI company. As best I can tell, the warning would be readable by the platform only if the user linked directly to the AI output.
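To make that concrete, here is a minimal sketch, assuming (as is typical) that a machine-readable mark travels as metadata alongside the text rather than inside the characters themselves. All names in it are hypothetical:

```python
# Minimal sketch (all names hypothetical): the provider attaches
# machine-readable provenance metadata to its output, but a platform
# only ever receives the pasted string.
generated = {
    "text": "Here is a summary of the article...",
    "provenance": {"generator": "example-llm-v1", "ai_generated": True},
}

# What the user actually copies into a social media post:
pasted = generated["text"]

def platform_can_detect(post: str) -> bool:
    # The platform receives only the string; there is no metadata
    # field left to inspect, so the mark is unrecoverable.
    return False

print(platform_can_detect(pasted))  # False
```

A mark embedded in the word choices themselves would survive the copy-paste, which is exactly what the comments below debate.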

See also the delay between the law's passage on March 13, 2024 and its application: "The AI Act will enter into force 20 days after publication in the Official Journal (expected in May or June 2024). Most of its provisions will become applicable two years after the AI Act’s entry into force. However, provisions related to prohibited AI systems will apply after six months, while the provisions regarding generative AI will apply after 12 months."

P9_TA(2024)0138
Artificial Intelligence Act
European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD))
Chapter IV (Transparency Obligations for Providers and Deployers of Certain AI Systems), Article 50(2), page 283:
"Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated."

Why might you be wrong?

Wildcard: this question will be scored Yes if any of the cited social media entities simply announces plans to label texts as AI-generated. That makes a Yes resolution more likely: if they decide that labeling AI texts is good for business, they may announce plans soon, even if they cannot figure out how to detect AI-generated texts.

sanyer
made a comment:
Why do you think it is a technical impossibility? If, e.g., OpenAI and Meta cooperate, OpenAI could provide tools to determine whether a given text was generated by its AI, right? (See my forecasts for how this could be done.)
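One concrete form such cooperation could take, sketched below with entirely hypothetical names, is an output registry: the provider retains a hash of every text it generates and lets platforms query it. The sketch also shows why this approach is fragile, which is where the reply below picks up:

```python
# Hypothetical sketch of provider/platform cooperation: the provider
# keeps hashes of everything it generates and exposes a lookup that a
# platform could query before labeling a post.
import hashlib

class ProviderRegistry:
    def __init__(self) -> None:
        self._output_hashes: set[str] = set()

    def record_output(self, text: str) -> None:
        self._output_hashes.add(hashlib.sha256(text.encode()).hexdigest())

    def was_generated_here(self, text: str) -> bool:
        return hashlib.sha256(text.encode()).hexdigest() in self._output_hashes

registry = ProviderRegistry()
registry.record_output("Here is a summary of the article...")

# Exact copies are caught...
print(registry.was_generated_here("Here is a summary of the article..."))  # True
# ...but any edit, however small, defeats the exact-match lookup.
print(registry.was_generated_here("Here is a summary of the article!"))    # False
```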
cmeinel
made a comment:

Certainly, any given provider of an AI text extrusion device could retrain its model to make certain choices among similar words or phrases obligatory, making texts created by that AI system easy to detect. However:

1) That does not eliminate false positives, which means angry users and, potentially, lawsuits.

2) How do we make all AI systems generate labeled texts? Make it the law? How would nations enforce it? The problem with labeled texts is that the labeling degrades the outputs, and there will surely be users who want the best outputs. Suppose only users in nations without these laws get unlabeled outputs: what then prevents someone from copying unlabeled AI texts and posting them somewhere viewable in nations with the prohibition?

3) To be detectable, the technique for labeling AI texts cannot be kept secret from those who choose to disable the labeling. A bad actor could write a program that strips the labeling from a text and post it to GitHub. Trivial programming, trivial CPU cycles to run (see the sketch below).
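To make point 3 concrete, here is a minimal sketch of how little code it would take to erase a watermark of the kind described above, one that works by favoring particular synonyms. The word pairs are illustrative, not any real provider's watermark:

```python
# Minimal sketch for point 3 (word pairs are illustrative): if the
# watermark is a statistical preference for particular synonyms, a
# dictionary of substitutions flips the signal a detector relies on.
SWAPS = {
    "utilize": "use",
    "commence": "begin",
    "additionally": "also",
}

def strip_watermark(text: str) -> str:
    # Replace each watermark-favored word with a neutral synonym.
    return " ".join(SWAPS.get(word, word) for word in text.split())

print(strip_watermark("we utilize data and additionally we commence testing"))
# -> "we use data and also we begin testing"
```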

If anyone wishes, I could go much deeper into this issue. My education is a BS in General Studies with three foci (creative writing, math, and systems engineering) and an MS in Industrial Engineering, the study of how massive, complex systems with human participants work together. I've been working with a large language model since 2016.

Plataea479
made a comment:

Technically it seems impossible. But after a Houthi missile strike on an open field near Eilat, I guess I'd better reconsider "impossible." Still, simply legislating that texts be labeled does not make it possible.

