Generative AI is a powerful tool that can be used for a variety of purposes, but it also has limitations. Here are some of the most significant ethical and practical drawbacks of generative AI:
Bias and fairness: Generative AI models are trained on data, and if this data is biased, the model will be biased as well. Models have already been shown to adopt and reinforce negative gender and racial stereotypes.
Limited understanding of the world: Generative AI models do not have a deep understanding of the world. This means that they can sometimes generate content that is nonsensical or unrealistic.
AI hallucinations: Artificial intelligence (AI) hallucinations, also known as AI delusions, are instances where an AI system generates false or misleading output and presents it as fact. One example is the creation of citations with plausible-sounding authors, titles, and journal names that do not correspond to real articles.
Copyright: Generative AI models require large amounts of textual training data, and the makers of models have not been transparent with their sources. It is ethically dubious to use copyrighted works for training without giving the copyright holders opportunities for consent or compensation.
Privacy: Some generative AI providers may store user inputs and/or use them as training data for future model iterations. It is important to read and understand how a platform will use input information before using it. WSU Executive Policy 8 prohibits including legally protected or regulated data (e.g., proprietary information, personally identifiable information, or data covered by HIPAA or FERPA) in queries submitted to generative AI platforms like ChatGPT.
Exploitation: Development of Generative AI systems has relied on human moderation to train models not to produce harmful content. These workers often came from countries with low prevailing wages, worked in poor conditions, and were not given adequate psychological support to counter the damaging effects of frequent exposure to disturbing content.
Imagine you're working on a research project and need to find information about a specific topic. You start by brainstorming some keywords related to your topic, but you're not sure if you've covered all the important ones. AI can be your keyword assistant, helping you find relevant keywords and expand your search to uncover more useful information.
1. Keyword Extraction:
AI can automatically identify key terms and phrases in written text, including research papers, articles, and even your own writing. This saves you the time and effort of picking out keywords manually.
Example:
Given a research paper on the impact of climate change on coral reefs, AI could automatically identify key terms and phrases such as "coral bleaching," "ocean acidification," and "sea surface temperature."
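To make the idea concrete, here is a minimal sketch of keyword extraction as simple term-frequency counting. This is a toy stand-in for what an AI tool does with far more sophistication; the sample abstract and the small stopword list are illustrative assumptions, not real data.

```python
# Toy keyword extraction: count non-stopword terms and return the most frequent.
# A real AI tool would use far richer language models than raw word frequency.
from collections import Counter
import re

# A deliberately tiny stopword list for illustration only.
STOPWORDS = {"the", "of", "on", "and", "a", "to", "in", "is", "are", "that", "for"}

def extract_keywords(text: str, top_n: int = 5) -> list[str]:
    """Return the top_n most frequent non-stopword terms in the text."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

# Illustrative abstract (invented for this example).
abstract = (
    "Rising ocean temperatures drive coral bleaching. Coral reefs face "
    "ocean acidification, and bleaching events threaten reef ecosystems."
)
print(extract_keywords(abstract, top_n=3))  # → ['ocean', 'coral', 'bleaching']
```

Frequency counting surfaces candidate terms; an AI assistant goes further by recognizing multi-word phrases and ranking terms by meaning rather than raw counts.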
2. Synonym and Antonym Finder:
AI can help you find synonyms (words with similar meanings) and antonyms (words with opposite meanings) for your keywords. This expands your search scope and helps you find more relevant information.
Example:
For the keyword "global warming," AI could suggest synonyms like "climate change" and "greenhouse effect," as well as antonyms like "global cooling" and "ice age."
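Synonym suggestions are most useful when folded back into a search string. The sketch below shows one way to do that, assuming a hand-made synonym map (an AI assistant would supply these suggestions dynamically; no real thesaurus API is used here).

```python
# Toy query expansion: combine a keyword with its synonyms into a
# boolean OR search string suitable for a library database.
# The synonym map is a hand-made illustration, not a real thesaurus.
SYNONYMS = {
    "global warming": ["climate change", "greenhouse effect"],
}

def expand_query(keyword: str) -> str:
    """Build a boolean OR search string from a keyword and its synonyms."""
    terms = [keyword] + SYNONYMS.get(keyword, [])
    return " OR ".join(f'"{t}"' for t in terms)

print(expand_query("global warming"))
# → "global warming" OR "climate change" OR "greenhouse effect"
```

Pasting an expanded string like this into a database search retrieves articles that use any of the related terms, which is the practical payoff of synonym suggestions.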
3. Topic-Specific Keyword Suggestions:
AI can tailor keyword suggestions to your specific research topic. It takes into account the context of your query and your previous search history to provide keywords that are most relevant to your needs.
Example:
When searching for information about the impacts of climate change on agriculture, AI could suggest keywords like "crop yields," "water scarcity," and "drought adaptation."
Generative AI can supplement literature-review workflows by automating steps that currently require substantial human attention and time, such as identifying papers on a topic, classifying papers, summarizing papers, and extracting data. Many of the tools in development charge a fee for use, although some offer free trials limited by time, functionality, or amount of use.