Maybe the NYT’s headline writers’ eyes weren’t that great to begin with?
The tech could represent the end of visual fact — the idea that video could serve as an objective record of reality — as we know it.
We already declared that with the advent of Photoshop. I don’t want to downplay the possibility of serious harm resulting from misinformation carried through this medium; people can be dumb. But I do want to say the sky isn’t falling. As the slop tsunami hits us, we are not required to stand still, throw our hands in the air, and take it. We will develop tools and sensibilities that help us avoid getting duped by model mud, and we will find ways and institutions to sieve out the nuggets of human content. Not all at once, but we will get there.
This is fear-mongering masquerading as balanced reporting. And it doesn’t even touch on the precarious financial situation the whole so-called AI bubble economy is in.
The real danger is the failing trust in traditional news sources and the attack on the truth from the right.
People have been believing what they want, regardless of whether they see it, for a long time. AI will fuel that, but it is not the root of the problem.
Traditional news sources became aggregators of actual news sources and open-source intel, and have made “embellishing” the norm: stock/reused visuals, speculating minutes into events, etc.
It is increasingly faked. The right just pretends that means the lies that feel “good” are the truth.
To no longer be able to trust video evidence is a big deal. Sure, the sky isn’t falling, but this is a massive step beyond what Photoshop enabled, and a major power-up for disinformation, which was already winning.
Except that you can still trust video evidence if you examine it carefully … for now.
But what if your phone comes with nice AI filters? The fake videos get more and more real, and the real videos get more and more fake.
All those tech CEOs meeting up with Trump makes me think this is a major reason for pouring money into this technology. Any time Trump says “fake news”, he can just say it is AI.
You couldn’t “trust” video before Sora et al. We had all these sightings of aliens and flying saucers, which conveniently stopped having an impact when everybody started carrying cameras around.
There will be a need to verify authenticity, and my prediction is that need will be met.
What you end up stuck doing is deciding to trust particular sources. This makes it a lot harder to establish a shared reality.
The tech could represent the end of visual fact — the idea that video could serve as an objective record of reality — as we know it.

We already declared that with the advent of Photoshop.

I think that this is “video” as in “moving images”. Photoshop isn’t a fantastic tool for fabricating video (though, given enough time and expense, I suppose it would be theoretically possible to do it frame by frame). In the past, the limitations of software made it much harder to falsify a video of someone than a single still image of them: not impossible, since Hollywood creates imaginary worlds, but much harder, more expensive, and requiring more expertise.
I don’t think that this is the “end of truth”. There was a world before photography and audio recordings, and we had ways of dealing with that. For example, we had reputable organizations whose role it was to send someone to various events to attest to them, placing their reputation at stake. We can, if need be, return to that.
And it may very well be that we can create new forms of recording that are more difficult to falsify. A while back, to help deal with widespread printing technology making counterfeiting easier, we rolled out holographic images, for example.
I can imagine an Internet-connected camera, as on a cell phone, that sends a hash of the image to a trusted server and obtains a timestamped cryptographic signature. That doesn’t stop before-the-fact forgeries, but it does deal with things that are fabricated after the fact, stuff like this:
https://en.wikipedia.org/wiki/Tourist_guy
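For what it’s worth, here’s a rough sketch of that idea in Python. Everything in it is hypothetical: a single “notary” holding an Ed25519 key stands in for the trusted server (a real deployment would look more like an RFC 3161-style timestamping authority or C2PA-style signing), and the camera just hashes its raw frame and asks the notary to bind that hash to the current time.

```python
# Rough sketch, not a real protocol: a hypothetical "notary" signs (hash, timestamp)
# pairs so that anyone can later check a clip hasn't been altered since capture.
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# --- trusted server side (the hypothetical notary) ---
notary_key = ed25519.Ed25519PrivateKey.generate()
notary_pub = notary_key.public_key()

def notarize(image_hash: str) -> dict:
    """Bind an image hash to the current time with the notary's signature."""
    record = {"sha256": image_hash, "timestamp": int(time.time())}
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "signature": notary_key.sign(payload).hex()}

# --- camera side ---
def hash_image(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# --- anyone verifying later ---
def verify(data: bytes, receipt: dict) -> bool:
    record = receipt["record"]
    if record["sha256"] != hash_image(data):
        return False  # pixels differ from what was notarized
    payload = json.dumps(record, sort_keys=True).encode()
    try:
        notary_pub.verify(bytes.fromhex(receipt["signature"]), payload)
        return True   # hash and timestamp are vouched for by the notary
    except InvalidSignature:
        return False  # receipt itself was forged or tampered with

frame = b"...raw sensor bytes..."        # stand-in for a captured frame
receipt = notarize(hash_image(frame))    # obtained at capture time
print(verify(frame, receipt))            # True
print(verify(frame + b"edit", receipt))  # False: altered after the fact
```

As the comment above notes, this only proves the pixels existed in that form at notarization time; it says nothing about whether the scene in front of the lens was staged or generated before capture.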