Child abuse images removed from AI image-generator training source, researchers say

By MATT O’BRIEN, AP technology writer

Artificial intelligence researchers said Friday they have deleted more than 2,000 web links to suspected child sexual abuse imagery from a dataset used to train popular AI image-generator tools.

The LAION research dataset is a huge index of online images and captions that’s been a source for leading AI image-makers such as Stable Diffusion and Midjourney.

But a report last year by the Stanford Internet Observatory found it contained links to sexually explicit images of children, contributing to the ease with which some AI tools have been able to produce photorealistic deepfakes that depict children.

That December report led LAION, which stands for the nonprofit Large-scale Artificial Intelligence Open Network, to immediately take down its dataset. Eight months later, LAION said in a blog post that it worked with the Stanford University watchdog group and anti-abuse organizations in Canada and the United Kingdom to fix the problem and release a cleaned-up dataset for future AI research.

Stanford researcher David Thiel, author of the December report, praised LAION for
