Children's books made with Midjourney/Dall-E 2/ChatGPT suffer from a "Whimsy Gap" — they are "moralistic, but not transporting."
The "images... are sometimes cute, even beautiful, but somehow off, with distorted proportions or elements of an idea mashed up to discordant effect."
Writes Alyssa Rosenberg in "Why AI will never beat Maurice Sendak" (WaPo). She tested the tools, trying to make children's books.
So far, these tools are limited to the data sets their creators have used to teach them about language and images. If AI is moralistic, limited in imagination or — as in the case of a powerful new avatar generator — inclined toward porn, that’s on the humans.
Oh? I don't know how she's so sure of that, but it sheds some light on an article I saw the other day that said AI wrote better and more "moral" opinions than the Supreme Court.
I can't find this right now. Did the lawprofs who wrote it withdraw it — perhaps because someone pointed out the flaw in their reasoning? The more "moral" opinions were, as you might imagine, more in line with the political left, and the machine may have been fed that point of view.
But I did find this:
ROBOT v. SCOTUS Round 2 (Robot wins again). pic.twitter.com/5sH28Xixk6
— Dorf on Law (@dorfonlaw) December 20, 2022
The guardrails are up! Is this why we can't get the whimsy? Morality must win, and anything off could be an evasion of morality. You can't even have a challenging discussion of morality, because the morality has already been chosen — by the humans — which means, ironically, that the machine is incapable of moral reasoning.
Back to Rosenberg. She asked ChatGPT to write "a story about a boy who is sent to bed without any dinner and imagines escaping to a land of wild things." Here's the story:
"Once upon a time there was a young boy named Peter who was sent to bed without any dinner. His parents had told him he had been naughty and he was too upset to eat. As he lay in bed, he imagined himself escaping to a land of wild things. In this magical place, the trees danced in the wind, the rivers sparkled in the sunlight, and the creatures of the forest sang and played. … He imagined himself making friends with some of the wild creatures and they showed him the secret places they liked to play. As he drifted off to sleep, he was filled with a sense of adventure and joy that he had never felt before."
One of the (many) problems is that this is a summary of a story, not a story. It's too abstract. Much of it is merely restating the question. A real children's book would — at the very least — describe details and convey a sense of experiencing a series of events. The real “Where the Wild Things Are,” by Maurice Sendak, has inventive ideas and language: Max sails "through night and day and in and out of weeks and almost over a year.” The reader is inside his imagination. We're not told that he imagines it. It seems to actually happen.
Perhaps ChatGPT is programmed not to lie. That Max sails in and out of weeks is misinformation.
After some other tests of ChatGPT, Rosenberg complains:
Every conclusion has to be a moral. Roguishness is out. Lessons Learned are in.... When I asked ChatGPT about its tendency to sermonize, it responded: “Not every story needs to have a moral or lesson, but many stories do include them. … Morals are often included in stories to help the reader reflect on their own beliefs and values …” blah, blah, blah you get the picture....
Ha ha. Rosenberg says the exercise "reminded me of tired child prodigies, trotted out to flaunt their brilliance, dutifully reproducing information they don’t understand and making frequent errors as a result."
Rosenberg concludes:
Rather than jailbreaking AI tools to simulate conversations between the rapper Ye and Adolf Hitler, or waiting uneasily for them to become sentient, why don’t we approach them as good parents would — and talk to them, or read to them, the way we do to children? It might be the only chance we have to infuse them with something like a soul.
I don't understand the proposal. What's the difference between "jailbreaking" them and talking/reading to them? Isn't it all a matter of feeding more material into them?
Perhaps "jailbreaking" gestures at the limitations that have been programmed into them — the guardrails preventing wrongthought.
You've got to be able to sail out over those guardrails — "through night and day and in and out of weeks and almost over a year" — to find "something like a soul."