Exploring the Intricacies of Generative AI
ChatGPT is a Large Language Model (LLM) that generates text by predicting the next word, one word at a time. It has revolutionized the field of AI, but many people have concerns about intellectual property, the accuracy of its information, and its environmental impact.
LLMs as Learning Tools
Many students now use LLMs like ChatGPT to build study tools, explain concepts, and generate practice tests. Professors are rightfully concerned: these models can sometimes give inaccurate information, and if you aren’t actually learning anything about the topic, how are you supposed to validate that information? To test the benefits and downsides of using generative AI as a learning tool, I’m going to try to use ChatGPT to guide me while building a native plant seed mix for a shady area.
Here was my prompt:
I’m trying to make a seed mix for a partial shade/shade and medium/medium-wet area. I want the mix to have both grasses and flowers and have plants that bloom throughout the entire season. Can you design a seed mix using this list of plants?
It actually did pretty well, a lot better than I expected. I think it helped that I gave the prompt some structure and limitations: including a list of possible plants prevents the model from suggesting invasive species. When I asked follow-up questions, it didn’t do as well on specific information about each plant, which makes sense because that information is more niche.
Now I’m going to give a similar prompt but without a list of plants:
I’m trying to make a seed mix for a partial shade/shade and medium/medium-wet area. I want the mix to have both grasses and flowers and have plants that bloom throughout the entire season. Can you design a seed mix that is suitable for Minnesota, USDA zone 5a?

I’m pretty impressed: it provided a list of grasses, sedges, and forbs that are all native to Minnesota. Among other websites, it sourced Minnesota Wildflowers, which is a great resource for this topic. It also gave suggestions for site preparation and first-year management of the area. There are some small issues; for example, Carex radiata is not the same as Common Wood Sedge (Carex blanda), and it doesn’t make this clear. Still, I think someone actually trying to create a seed mix from these suggestions would be able to do so pretty easily.
I really wanted to stretch the limits of ChatGPT here, so here was my last prompt:
Create a native seed mix for this area in Minnesota.

I’m thoroughly surprised. I tried to choose a photo of my backyard with context that would tell a person what type of planting to do: you can see an open area with trees behind it and dappled light, not direct sunlight. What you can’t see is that there is a swamp not far behind those trees. It did much better than I expected at identifying the type of environment and then giving recommendations based on it.
I think AI can be a useful tool in cases where its use is permitted, especially for subjects you already know something about. It is more dangerous when you don’t have pre-existing knowledge of a subject and simply take an LLM’s answer as gospel.
The Use of Creative Work for Training
I hate that companies are sneaking clauses into their privacy policies that make anything someone creates on their site the company’s property. I think people should be compensated when their work is used as training data. However, I don’t think the rise of image generation is going to eliminate artists’ jobs; I think people who buy art understand that it is completely different from something AI-generated.
When it comes to changing the system, I think lawsuits are one of the best options right now, but unfortunately that is not financially possible for most people. We are also seeing a rise in tools like Glaze and Nightshade that let creatives prevent their work from being used as training data. It’s a great solution for the time being, but really only a band-aid, because AI companies are already figuring out how to bypass those protections.
Next-word Prediction
AI trained to predict the next word exhibits a range of strange behaviors: because the models are trained on an enormous amount of data, they are able to pick up on subtle patterns in it. I guess I don’t really understand the idea of emergent abilities very well. My understanding from the article by Schaeffer et al. is that these are abilities that show up in larger models but not smaller ones, yet they seem to argue in their paper that this might not actually be a real phenomenon. I would’ve appreciated more examples of these “emergent abilities” beyond just zero-shot learning.
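To make “predicting the next word” concrete, here is a toy sketch of my own (not how ChatGPT actually works internally; real LLMs use neural networks over vast corpora): a bigram model that just counts which word most often follows which in a tiny made-up corpus, then predicts the most frequent follower. The objective is the same in spirit as an LLM’s, just radically simplified.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words were observed to follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat": it followed "the" twice, more than any other word
```

An LLM does essentially this at enormous scale, with contexts of thousands of words instead of one, and learned weights instead of raw counts, which is why it can produce fluent text without any understanding behind it.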
Environmental Impact
The environmental impact of AI is one of the areas where I diverge from a lot of people, because I think the concerns are somewhat overblown. I am very concerned about the environment, to the point of being annoying about it. There are some real environmental concerns with AI, but the magnitude of its impact seems very small compared to a lot of other things, like agriculture.
Some people are concerned about how much land data centers use. I must point out that there are over 2 million acres of golf courses in the United States (and consider how much water they use). There are real potential benefits to AI, some of which we are already seeing; the only benefit of golf is individual people’s enjoyment. I’m not hating on golfers, but I don’t think it’s fair to pick and choose which environmental issues we care about based on what is trendy. A lot of people focus on water usage when it is actually quite minimal compared to how much water we use to grow crops and water lawns.
I think the real issue is more systemic than just ‘AI bad.’ In America especially, we are used to overconsuming resources without caring about how we impact other people. Products are increasingly made not to last. Apple comes out with a new iPhone every year, and many people ditch their old phones for a new one each time. If my Apple AirPods break, I can’t even get them fixed; the Apple Store just gives me a new pair. Modern technology is not made to be upgraded or repaired so much as to be replaced. The biggest problem, in my opinion, is mining, which is worsened by the fact that we are not good at recycling what we have already used.
Final Thoughts
I do kind of wish this article had been broken down even further, because I feel like each of these topics could have its own article. Still, as a general overview of issues surrounding genAI, I enjoyed it.
It left me asking: what are the possible consequences of using ChatGPT as a source? How can people be made aware that LLMs are just prediction engines? I think a lot of people misunderstand LLMs and how they work; I hear people describe them as ‘thinking,’ as if they have consciousness. Helping people understand that LLMs just predict the next word based on a whole lot of data could help them make more informed decisions when using AI for learning, but I don’t know the best way to approach that.