Today, we discuss writers and A.I. Keep in mind that this is an ever-evolving topic where the ins and outs change quickly, but here are a few things writers should know about A.I. in early 2024.
Throughout this essay, we’ll use the popular term A.I., or artificial intelligence, for programs that create images or text based on machine learning. In reality, these programs aren’t actually artificial intelligence.
True A.I. attempts to create programs that can think. Right now, we don’t have any true artificial intelligence programs, though programmers are working toward that goal. According to Ted Chiang: “[We are] a long way off from being able to create a single human-equivalent A.I., let alone billions of them.”
But there are programs that people are calling artificial intelligence, even though they are something very different. The kind of programs we’re talking about (things like Midjourney for art or ChatGPT for text) aren’t trying to imitate thought. They aren’t trying to get computers to think like humans, or even to create like humans. Not really.
These programs today are basically compilers. In the words of author Jason Sanford: “[Machine-learning] programs train on and then generate new versions of what people have already created before.”
In effect, machine learning uses complex math to analyze massive amounts of data (written text in the case of programs like ChatGPT, images in the case of programs like Midjourney). Then, when given a prompt, the program uses this pool of analyzed data to create an approximation of what a human would produce when given the same prompt. You might expect the results to be difficult to tell apart from a piece produced by a human, but that’s not quite true. An artist has no trouble spotting A.I.-created images, and a writer, editor, or well-read nonwriter will have no trouble spotting A.I.-created text.
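To picture that “analyze, then approximate” process, here is a minimal sketch in Python. It is my own toy illustration, not anything from the programs mentioned above: it simply counts which word follows which in a scrap of training text, then strings guesses together. Real systems use vastly more data and far more complex math, but the basic move is the same: predict what comes next based on what has come before.

```python
# Toy "analyze, then predict" demo (illustration only; real systems are far more complex).
import random
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the dog sat on the rug"

# "Training": tally how often each word follows each other word in the sample text.
follow_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Guess a next word by sampling from what followed `word` in the training text."""
    options = follow_counts.get(word)
    if not options:
        return None  # nothing ever followed this word, so the "model" has no guess
    choices, weights = zip(*options.items())
    return random.choices(choices, weights=weights)[0]

# "Generation": string guesses together, one word at a time, starting from a prompt.
word = "the"
generated = [word]
for _ in range(6):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)

print(" ".join(generated))  # e.g. "the cat sat on the rug and"
```

Notice that the sketch never understands the sentence it produces; it only echoes patterns it has already seen, which is exactly why the mistakes described below happen.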
A.I. makes mistakes that humans generally don’t because humans interpret data differently. Humans think. Computers do not. As a result, A.I. art (at least early on) often included obvious errors like too many fingers or nostrils in odd places on faces. And A.I. text is usually stilted and error-ridden. But, as developers tweak programs, the output is getting better. Will it ever mirror the work of a human? Probably not, or not well anyway, for a variety of reasons.
For one, A.I. is only as good as the material used to train it. These programs cannot judge value in material. They can’t tell that a Renoir painting is artistically better than the rather bloated coon dog I painted when I was twelve. If my painting got into the mix, it would be broken down into math for its predictive ability, just like anything else. This is also true of text. When the program is gobbling its way through the Internet, it cannot value the writing of a nuclear physicist over the ramblings of a guy in a tinfoil hat. It doesn’t understand what it’s analyzing. Therefore, A.I. will always produce flawed outcomes. It lacks common sense. It can’t employ critical thinking skills.
That doesn’t mean the creators of these programs aren’t working to reduce the flaws, but they are dealing with computers. And with computers, everything is numbers. You can get the computer to ignore certain things if you can devise a way to do it, but the computer will only be following directions. It won’t be thinking and evaluating. It isn’t actual intelligence. It’s a machine that has analyzed a mass of data so it can use complex math to predict what a person might create. Not a smart person. Not a talented person. It can’t make those kinds of evaluations. It’s simply going for “generic human,” and even then, it falls short. A lot.
Another problem is that the massive amount of data needed to train A.I. had to come from somewhere. It was all created by a human. (This is pretty much true at this point, though if A.I. continues to train from things found on the Internet, it will eventually be training based on A.I.-generated art along with human-generated art. That is hardly likely to produce good results.) The humans who created and own the rights to the art/writing were not asked before their materials were used to “train” the programs. Overall, artists and authors have not been pleased, and they have certainly not benefitted from being the unpaid teachers of these programs.
To quote a statement by the Writers Guild of America: “A.I. software does not create anything. It generates a regurgitation of what it’s fed. If it’s been fed both copyright-protected and public domain content, it cannot distinguish between the two. … To the contrary, plagiarism is a feature of the A.I. process.”
Do writers ever use A.I. in acceptable ways to help them create? It depends on how you define acceptable. In his ebook, Creativity in the Age of Machine Learning, Jason Sanford said he talked to an author who used Midjourney to help visualize scenes for stories. Another used ChatGPT when brainstorming ideas for stories. And another used A.I. to help create a synopsis of their book for submission. For some, each of these situations would fall under the heading of assistive tech for creators. For others, any use of A.I. is suspicious and unacceptable. Those with this viewpoint tend to believe that since the programs are also being used to create complete works that are then sold or used by companies that might once have employed an artist or writer, the programs should be avoided by all creators.
Thus, writers who use A.I. for any part of their creative process are usually reluctant to talk about it, mostly because the whole process of creating these huge machine-learning programs has been abusive toward artists and writers. Plus, as A.I. writing improves (and it may), it will absolutely be used to replace copywriters wherever possible, putting people out of jobs. Even now, A.I.-created books are popping up on Amazon. And book publishers have used A.I. art for covers (sometimes without realizing it).
Also, short stories and articles created by A.I. have flooded submission systems at magazines and newspapers. Because editors don’t have time to weed out these submissions, the logical result is to close submissions entirely. That certainly doesn’t help living, human authors, but magazines and newspapers are often on the edge financially already. They cannot afford the extra staff needed to screen out these inappropriate submissions.
An interesting side note on the flood of A.I. stories: editors say they have no problem spotting stories created by machine-learning programs. This isn’t shocking. People whose jobs depend on evaluating good writing are going to spot the things that make machine-compiled stories awkward and unusable. These kinds of machine-learning programs tend to produce material with awkward language and serious flaws. The producers of these programs know it, though they use romantic terms for these errors, saying the programs “hallucinate” or suffer from “delusions.” This is simply another way to say the materials they produce are flawed and should not be accepted at face value.
Certainly, writers do not benefit from having one of these programs write for them. The writing will have serious problems, and the whole process of producing it depends upon the use of other writers’ works. However, some writers do use these programs for small steps in the process. Some play with programs like these to overcome writer’s block or gain a different possible perspective. In other words, sometimes a program like ChatGPT can be a kind of sounding board. Still, not everyone will be in favor of even this kind of use. Because artists and writers worry about the damage these programs can do to their careers, most shy away from admitting they use them at all.
Right now, it’s difficult to foresee what will happen with these machine-learning programs. Many writers worry about what these programs will mean for their livelihood, especially if the serious flaws in the programs are eventually ironed out. And the legality of the way machine-learning was trained is being tested in the courts.
At the moment, public opinion is mixed on whether these programs are a plus or a minus for society. In the meantime, it will be interesting to watch the court proceedings. But don’t be sucked in by companies that want you to think of these programs as alive or creative or aware. They’re programs. They don’t dream or hallucinate, though sometimes they’re glitchy, as you might expect with complex programs.
They are meant to be a tool, but the jury is still out as to whether this tool will be a good thing or not. Either way, it’s likely to be something we’re still talking about for the next few years as the software continues to change and improve.
With over 100 books in publication, Jan Fields writes both chapter books for children and mystery novels for adults. She’s also known for a variety of experiences teaching writing, from one-session SCBWI events to lengthier Highlights Foundation workshops to these blog posts for the Institute of Children’s Literature. As a former ICL instructor, Jan enjoys equipping writers for success in whatever way she can.
2 Comments
Thanks, Jan. As always, an excellent article.
I’ve already had a client move to using software to compile drafts based on client interviews. Authors then edit the drafts for a lower fee than they were paid before this tool existed.