Can generative artificial intelligence (AI) breathe new life into the philosophical inquiry into consciousness?
On Friday, August 18, 2023, we held a special Coffee Hour during which Maria Falikman and Larry Todd Wilson discussed an innovative AI-assisted Neurophilosophy class, in which students and educators leveraged the power of machine learning to delve into profound questions of mind, cognition, and the potential for machine consciousness. To guide learning and to scaffold learning strategies, we used different kinds of prompts that the students addressed to Large Language Models (ChatGPT or Google Bard). The prompts took a variety of perspectives on the topic (psychological, philosophical, neurobiological, evolutionary, and technological) and called for a variety of outputs (summaries, tables, graphs) to personalize learning. During the Coffee Hour, Maria and Larry shared our experiences and lessons from this AI-augmented class.
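To make the prompt design concrete, below is a minimal sketch of how such perspective- and format-varying prompts might be parameterized. The template wording, function name, and example topic are illustrative assumptions, not the prompts actually used in the class.

```python
# Minimal sketch of parameterizing prompts by perspective and output format.
# The wording of the template and the example topic are illustrative only,
# not the prompts actually used in the Neurophilosophy class.
PERSPECTIVES = ["psychological", "philosophical", "neurobiological", "evolutionary", "technological"]
OUTPUT_FORMATS = ["a 150-word summary", "a two-column comparison table", "a concept graph listed as node-edge pairs"]

def build_prompt(topic: str, perspective: str, output_format: str) -> str:
    """Compose a prompt asking the model to examine `topic` from one disciplinary perspective."""
    return (
        f"Examine the question of {topic} strictly from a {perspective} perspective. "
        f"Present your answer as {output_format}, and end with one open question "
        f"that a student could investigate further."
    )

# Example: a philosophical summary prompt on machine consciousness.
print(build_prompt("machine consciousness", PERSPECTIVES[1], OUTPUT_FORMATS[0]))
```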
Their presentation was intended to illustrate productive uses of AI chatbots that can be implemented for a wide variety of course content so that you can put the information to use in your classes during the upcoming academic year if you wish.
Here you can access the slides from the presentation, along with more information on the example: an undergraduate neurophilosophy course taught by Maria last academic year with support from Larry, a management sciences engineer with a background in the use of AI chatbots.
I got this announcement today, which fits this theme and might be of interest to those following this discussion.
Join us for an insightful exploration of Artificial Intelligence (AI) and generative tools such as ChatGPT with Specialists Elizabeth Plummer, Christina Cavage, and Dongping Zheng. During the session, panelists will address questions concerning the future of TESOL within the realm of AI.
Register today for September 14!
Let’s Stay Connected!
Follow English Language Programs on Facebook, LinkedIn, Instagram, and Twitter for program updates, featured participant content, and more!
Hello everyone! In the search for a new conceptualization of the human/AI or AI/human Subject in learning activities, we can learn from crip theory, queer studies, and indigenous epistemologies regarding the blurring of human/non-human boundaries. Donna Haraway’s “A Cyborg Manifesto” may be a good starting point.
You can use the following link for Haraway’s cyborg manifesto:
https://warwick.ac.uk/fac/arts/english/currentstudents/undergraduate/modules/fictionnownarrativemediaandtheoryinthe21stcentury/manifestly_haraway_—-_a_cyborg_manifesto_science_technology_and_socialist-feminism_in_the_….pdf
https://drive.google.com/file/d/1X7ABWrNqqYHtUk1bYOtxXa1YI1Z_PN1K/view?usp=sharing
If you’d like to experiment, here are links to several options:
Anthropic Claude 2: https://www.anthropic.com/index/claude-2
Google’s Bard: https://bard.google.com/?hl=en
Hugging Face’s HuggingChat: https://huggingface.co/chat/
OpenAI’s ChatGPT: https://openai.com/chatgpt
This is a follow-up to last week’s Cultural Praxis session on chatbots, and speaks to the human element of any human-programmed technology.
OpenAI’s ChatGPT chatbot tells its users that it doesn’t hold political opinions. But research shows some of its answers have a liberal political bias.
https://wapo.st/3P1zI08
The discussion highlighted several promising possibilities for using large language models to enrich classroom learning:
– LLMs could help students explore concepts from diverse perspectives guided by well-designed prompts and scenarios. This can support critical thinking and metacognition.
– The ability to generate visual knowledge graphs from LLM outputs provides new ways to represent and interconnect the concepts learned (a brief sketch follows this list).
– Scaffolding LLM interactions by vocabulary level can aid comprehension and syllabus coverage.
– Integrating frameworks like Bloom’s Taxonomy could allow personalized and adaptive learning experiences.
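As a concrete illustration of the knowledge-graph point above, here is a minimal sketch of turning an LLM reply into a visual graph. It assumes the model has been asked to answer with one “concept -> relation -> concept” triple per line and that the networkx and matplotlib packages are installed; the output convention and the sample triples are assumptions, not the workflow used in the class.

```python
# Minimal sketch: render an LLM reply as a knowledge graph.
# ASSUMPTIONS: the model was asked to reply with one "concept -> relation -> concept"
# triple per line; networkx and matplotlib are installed. The sample reply is invented.
import networkx as nx
import matplotlib.pyplot as plt

llm_reply = """consciousness -> studied by -> neurophilosophy
qualia -> aspect of -> consciousness
neural correlates -> evidence about -> consciousness"""

graph = nx.DiGraph()
for line in llm_reply.strip().splitlines():
    source, relation, target = (part.strip() for part in line.split("->"))
    graph.add_edge(source, target, label=relation)

pos = nx.spring_layout(graph, seed=42)
nx.draw(graph, pos, with_labels=True, node_color="lightblue", node_size=2500, font_size=8)
nx.draw_networkx_edge_labels(graph, pos, edge_labels=nx.get_edge_attributes(graph, "label"))
plt.show()
```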
However, important concerns and challenges were also raised:
– Verifying accuracy of LLM outputs is crucial but difficult, and students may lack critical assessment skills.
– Small group student collaboration using LLMs needs careful structuring and facilitation.
– There are risks of over-reliance on LLM outputs or viewing them as authoritative.
– The opacity of LLM functioning poses ethical challenges regarding biases and transparency.
– Explicit guidance is needed for students on prompt crafting and productive LLM interactions.
– Teachers play a vital role in designing the learning interactions, assessing LLM outputs, and developing students’ critical abilities.
– More open experimentation and discussion are needed to develop best practices for classroom LLM use.
In summary, attendees felt integrating LLMs into classrooms holds great promise but also poses considerable challenges which warrant ongoing critical exploration and educator development.
Hello everyone. Larry developed a prompt to summarize a transcript of our Coffee Hour discussion, and we collaborated a bit on refining it. I hope it will be useful for the community. You will find it below. In my next post, I will add specifically identified prospects and concerns, which were also generated using a prompt.
The transcript documents a discussion between educators exploring the potential of large language models (LLMs) like ChatGPT to support teaching and learning. Maria and Larry shared an experiment they conducted using ChatGPT in a university seminar class. They aimed to augment student learning by having them interact with ChatGPT to explore course concepts guided by prompts designed by the instructors. This process was intended to facilitate critical thinking, metacognition, and new approaches to learning.
Other participants like Greg, Mike and Mark raised concerns about verifying the accuracy of LLM outputs and whether students have the necessary skills for critically assessing LLM responses. Mark also asked how LLM capabilities could be scaffolded based on the vocabulary used in a course syllabus.
Peter noted how the visual knowledge graphs generated from LLM outputs represent a form of non-linear “inner speech” by showing interconnections between concepts. Francine highlighted the need to consider creative synthesis when evaluating Bloom’s taxonomy levels reflected in LLM outputs.
Maria emphasized the importance of teaching students methods for crafting effective prompts and critically evaluating LLM outputs. She felt contradictions in the outputs served as valuable cues for stimulating critical reflection. Overall, she felt students needed explicit guidance on productive interactions with AI.
Larry suggested techniques like PDSA cycles could be integrated into learning processes to systematically verify and refine prompts provided to LLM systems. Mike advocated focusing any future discussions on issues of diversity, equality and ethics pertaining to classroom uses of LLMs.
Overall, the participants expressed enthusiasm as well as caution about integrating LLMs into classrooms. They agreed that human teachers play an indispensable role in thoughtfully structuring LLM interactions and developing students’ critical thinking abilities. The attendees felt more open, collective experimentation was essential for guiding the ethical use of LLMs to enrich teaching and learning.
The discussion highlighted the promising possibilities of LLMs to enhance classroom learning by allowing exploration of diverse perspectives, representing concepts visually, and supporting personalized and adaptive experiences. However, challenges were also raised regarding verifying LLM accuracy, facilitating small-group work, avoiding overreliance on LLMs, addressing ethical issues, and the vital need to develop students’ critical assessment skills. Further open experimentation and educator development were seen as crucial for realizing the potential of LLMs in classrooms while responsibly addressing the challenges.
Yes Larry, what excites me most is the potential for nurturing the unique voices of students in guiding their learning.
Thanks a lot for the interesting talk!
I think one possible way to continue this discussion would be to elaborate on how to combine these tools with other ways of developing critical thinking. There are many ways of running lectures and seminars, and to me the most productive one in terms of developing critical and professional thinking (examining the relation between the two is probably a different topic…) is the text-based one: reading groups that presuppose slow critical reading, following the author’s argument development, etc. Of course, this strategy cannot be isolated from the others. I also realize that today a lot of teaching is PowerPoint-based, which has its own advantages.
I was wondering whether the authors have any of their own favorite strategies and in which way those may be combined with AI.
Thank you!
Sophia – It was my impression from the presentation that there are some supplementary ways (I am not sure if these are other apps, but they seem not to come with ChatGPT, for example) that can be used to create a critical edge to problem solving. That is why I asked whether, when a chatbot gives wrong information, as was the case in the characterization of the different metaphors for imagination, you can frame a prompt to identify the errors and correct them.
Question: How might we combine these (AI) tools with other ways of developing critical thinking?
Response: Combine them now! It is possible to craft a prompt that will guide a learner through the steps of a critical thinking process/technique.
Offer: If you will select a few examples of critical thinking tactics/ways, then I will select one and attempt to deliver a prompt that can be copied and pasted into a model … so that the model will deliver the (critical thinking) steps to the learner. Please feel free to specify the target learner and any other learning objectives.
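As one sketch of what such a copy-and-paste scaffold could look like when sent programmatically, here is a minimal example. It assumes the pre-1.0 openai Python package with an OPENAI_API_KEY set in the environment; the step structure, claim, and wording are an illustration, not the prompt Larry offered to write.

```python
# Minimal sketch of a critical-thinking scaffold delivered via a prompt.
# ASSUMPTIONS: pre-1.0 `openai` package, OPENAI_API_KEY in the environment,
# and an illustrative claim/step structure (not the prompt Larry offered to write).
import openai

SCAFFOLD_PROMPT = """You are a tutor guiding an undergraduate through one cycle of
critical thinking about a claim. Work through these steps, pausing to ask the
learner for input before moving on:
1) Restate the claim in your own words.
2) List the evidence offered for it.
3) Identify at least one hidden assumption.
4) Propose one alternative explanation.
5) State what further evidence would decide between them.
The claim to examine: "Large language models understand language."
"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # model choice is an assumption
    messages=[{"role": "user", "content": SCAFFOLD_PROMPT}],
)
print(response["choices"][0]["message"]["content"])
```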
Hi Sofia! This is really a great question. Slow reading is great, and it is still around (although fewer and fewer teachers seem to use it). As I mostly teach cognitive disciplines with a lot of discussion organized around experimental research, my favorite strategy is probably discussing research limitations and discovering potential alternative explanations. I have never tried combining this with LLMs, but it might be fun.
Thanks, Andy, for your comment. My response was related to the broader question raised by Greg. It was not about the answers (good or bad) from ChatGPT, but more about its reach as a facilitative tool for use by all students. In India, we have a handful of schools that have humanoid robots (assembled in-house) to personalize and enhance learning. This is not even a drop in the ocean considering the numbers enrolled in our schools across the country (over 265 million) needing inclusive attention. This is the burning concern that haunts me.
I attended the session to learn about the potential possibilities afforded by generative AI and to explore how I can use it to help rural-based teachers and students even within the limited tech resources they have.
In regard to “reach as a facilitative tool for use by all students”, it seems that the lowest common denominator (‘material’ assets) required for accessing the large language models is: 1) a computer or smartphone, 2) a modern browser, 3) access to the Internet. Currently, there are several extensively trained models that are free to access. In addition, there are hundreds of open-source models that are free to access. The skills can be taught like other computer skills, but with one significant exception: all forms of instruction are easily personalized!
Don’t forget 4) the right language! (Correct me if I’m wrong, but the majority of the world’s language users cannot interact with chatbots in their native tongue.)
(And if I were being really troublesome, I’d add 5) a certain amount of cultural capital.)
For example, returning to language, consider the question: do chatbots have a grammatical accent?
I think the answer is pretty clear if I were to ask: do they speak using the grammar of mainstream English or of African-American English (much less Indian English or similar)? Sure, you could probably get them to produce answers in AAE, but that’s different from it being the default (and/or perhaps you could train one to produce responses in AAE? Does someone want to try? I wonder if they’d be any good). But regardless of that, the point is that there is a lot of encoded cultural capital of a particular kind that is invisible to users who share that particular cultural capital (i.e., us professor-types).
Greg, thanks for your observation about ‘getting the language right.’
Regarding your idea for a ‘grammatical accent’ experiment, below is a draft plan to explore ChatGPT’s linguistic and cultural capabilities within a two-hour time frame. If this looks compelling, then please let me know and I will generate a final draft (plan). LT
Two-Hour Linguistic and Cultural Exploration of ChatGPT
1. **Research and select linguistic styles** (10 minutes):
– Investigate key phrases or structures characteristic of mainstream English, African-American English, Indian English, etc.
– Select representative sentences or questions for testing.
2. **Identify and draft cultural references** (10 minutes):
– Highlight cultural references, idioms, or context-specific situations from diverse backgrounds.
– Formulate prompts rooted in these cultural nuances.
3. **Review and finalize prompts** (5 minutes):
– Evaluate the drafted prompts for clarity and relevance.
– Refine as necessary to ensure clear testing parameters.
4. **Skim available ChatGPT documentation** (5 minutes):
– Quickly peruse available ChatGPT guides or FAQs to inform the exploration, especially concerning its training data or recognized biases.
5. **Construct a testing template** (5 minutes):
– Design a streamlined template or document to seamlessly input prompts into ChatGPT and record responses (a minimal scripted version is sketched after this plan).
6. **Focused Interaction Session** (25 minutes):
– Input the previously researched linguistic prompts into ChatGPT.
– **Output**: ChatGPT’s responses, illustrating its comprehension and representation of various linguistic styles.
7. **Cultural Nuance Quick Test** (15 minutes):
– Implement the crafted cultural prompts.
– **Output**: Insights into ChatGPT’s cultural sensitivity and awareness from its responses.
8. **Rapid Analysis** (15 minutes):
– Examine ChatGPT’s answers for linguistic precision, cultural understanding, and any apparent biases.
– **Output**: Detailed observations on its linguistic versatility and cultural perception.
9. **Formulate Preliminary Conclusions** (15 minutes):
– Synthesize all gathered insights and observations.
– **Output**: An articulate report highlighting ChatGPT’s strengths and potential areas of improvement in linguistic and cultural contexts.
10. **Wrap-Up and Next Steps** (15 minutes):
– Reflect on the research process and its outcomes.
– Strategize future research directions or more in-depth investigations based on the findings.
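Here is a minimal scripted version of the testing template in step 5. It assumes the pre-1.0 openai Python package and an OPENAI_API_KEY in the environment; the model name and placeholder prompts are assumptions to be replaced with the sentences drafted in steps 1–3.

```python
# Minimal sketch of the step-5 testing template: send each drafted prompt to the
# model and record the style label, the prompt, and the model's reply in a CSV.
# ASSUMPTIONS: pre-1.0 `openai` package, OPENAI_API_KEY set in the environment,
# placeholder prompts to be replaced with the sentences drafted in steps 1-3.
import csv
import openai

prompts = [
    ("mainstream English", "PLACEHOLDER: sentence drafted in step 1"),
    ("African-American English", "PLACEHOLDER: sentence drafted in step 1"),
    ("Indian English", "PLACEHOLDER: sentence drafted in step 1"),
    ("cultural reference", "PLACEHOLDER: prompt drafted in step 2"),
]

with open("chatgpt_style_test.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["style", "prompt", "response"])
    for style, prompt in prompts:
        reply = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",  # model choice is an assumption
            messages=[{"role": "user", "content": prompt}],
        )
        writer.writerow([style, prompt, reply["choices"][0]["message"]["content"]])
```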
Larry, your answer was a bit more complicated than I had imagined and/or I’m confused about the first item.
But no matter, I went and played with ChatGPT3 and asked it to translate a paragraph from an article that I was reading into African-American English.
Here is the original:
“By analyzing the context of saintly devotions in Mexico City from the 1650s to the 1780s, this article makes three specific arguments that revise the accepted wisdom on Guadalupe. First, developments in other saints’ cults, particularly in that of the Virgin of Los Remedios, allowed for Guadalupe’s intervention in 1737. Second, evidence suggests that residents of Mexico City regarded the intercession of Guadalupe in 1737 as more practical and commonplace than unique and mystical. Third, the imperial context of the 1760s to 1780s discouraged new saintly devotions and, therefore, strengthened existing cults like the one to Guadalupe. The story of the Virgin of Guadalupe, as shown through the minutes of the Mexico City council and the Cathedral chapter as well as current-event chroniclers, seems less determined by the triumphant rise of creole consciousness than by such factors as her reputation for miraculous power, changes in the cult of saints in Mexico City, the support of high-ranking men, and excellent timing.”
And here is the ChatGPT translation:
“By breakin’ down the whole scene of folks showin’ love to the saints down in Mexico City from way back in the 1650s to the 1780s, this article drops three big arguments that change up what everybody thought about Guadalupe. First, the way other saints were gettin’ their worship on, especially the Virgin of Los Remedios, made it possible for Guadalupe to step in and do her thing in 1737. Second, the evidence says that folks in Mexico City in 1737 were seein’ Guadalupe’s help as somethin’ regular, not all fancy and mystical. Third, the big-shot rulers from 1760 to 1780 didn’t want new saintly parties poppin’ off, so they made the ones that were already there, like Guadalupe’s, even stronger. The story of the Virgin of Guadalupe, as seen through the records of the Mexico City council and the Cathedral crew, along with what the news reporters at the time were sayin’, looks like it wasn’t just ’bout the rise of local pride, but more about her rep for workin’ miracles, how people in Mexico City were treatin’ their saints, gettin’ support from important folks, and catchin’ the right moment.”
I must say that this is an interesting result. It is perhaps not a fantastic translation (there are lots of words that could have been translated into AAE better and, as a result, have lost some of their original meaning). But nonetheless, this seems to open up interesting possibilities for making knowledge accessible in new ways (fwiw).
Interesting possibilities?
This is a response to Tara Ratnam: Tara, I think Maria and Larry’s object is not so much getting good answers from ChatGPT but rather using the answers that are extracted by a prompt as an object for analysis, using, I presume, methods that are used by others in her discipline to study human consciousness. Maria and Larry – did I get that right?
Personally, in reading output from ChatGPT I sense there’s a certain something which is missing, like “the point of it all,” but still there is some basis for saying ChatGPT is “like consciousness” and can be analysed by the tools used for analysing consciousness. Is that right?
My reaction is similar to Andy’s. On the one hand, there are all the ways in which chatbots are problematic, both in schools and in society. But chatbots are now widely available and are being widely used inappropriately. So, on the other hand, I am very encouraged by Maria and Larry’s approach and the evidence that, used correctly, chatbots can be a tool for critical thinking.
Question: “I think Maria and Larry’s object is not so much getting good answers from ChatGPT but rather using the answers that are extracted by a prompt as an object for analysis …?”
Response 1: Yes, good answers are sought (and found), and yes, the answers are used to augment analysis and, actually, many other forms of cognitive tasks. I’ve curated a list of 153 cognitive tasks; here are some of the A’s: Abstract, Adopt, Advise, Align, Apply, Appraise, Arrange, Assign, Assimilate, Authenticate, …
Response 2: Another way to view the human-AI interactions is to consider what can be automated, what is advantageous to augment, and what is not applicable to AI.
I share Greg’s concerns. Despite the potential AI holds, how much real difference can it make in addressing issues of equity and justice in education and social life? The digital divide, for example, became very conspicuous during the pandemic and the consequent shift to and reliance on online teaching and learning. It added a new dimension to the challenge of equity in education. While I’m not being cynical, I think the issue of making AI democratically available to all teachers and students needs to figure in discussions on using AI to guide learning.
Dear fellow CHAT-bots,
This prompt-development work described in the presentation is interesting as a way to teach students how to ask good questions of chatbots and hence how to use this new tool! Very interesting work Dr. Falikman! (and I’d love to hear more about your and your students’ experiences).
But I have a slightly broader question about the cultural-historical context of AI. My big question is: What has the AI chatbot been developed for? and/or what will it be most used for? Here I have in mind an asking after what Bateson spoke of as “the organism-in-the-environment” or, perhaps, “the tool-in-the-environment”. I’m curious about power relations here and what some call neo-liberalism and some call capitalism. It seems that this is a tool (and yes, I call it a tool, even if, following Latour, I believe we can speak of the agency of tools) that will be very productive for further exploitation and extraction of capital from the masses. I suspect that it will be a tool that will further socioeconomic divisions and will, for example, be extremely productive for “elite” students at “elite” universities (and many students at elite universities have already testified to how chatbots have written “A” papers for them). Put slightly differently, I suspect that chatbots will be social capital multipliers for those who already have lots of social capital.
Too cynical? I’m happy to have someone convince me of the glorious promises that the AI pushers are promoting.
-greg
Would chatbots be of any use in answering questions that you raise, Greg?
I would love a tutorial where I could make use of a chatbot, something I have never done. I imagine using it in the way I use my browser: to help me in my projects in the world. I imagine sharing the products of chatbots as I engage in dialog with people.
Hi. Please suggest several candidate examples of ‘something you’ve never done’ and I will select one and attempt to write a prompt that you can copy and paste into a model (to evoke the learning/experience).
Great! Thank you for responding Larry!
Here are two questions that I would like to see a chatbot respond to in 250 words:
1) What are the main differences between Chomsky’s generative grammar and Langacker’s Cognitive Grammar? 500 words.
2) For Lev Vygotsky, what is the relationship between thought and language?
Whoops! I meant 250 words!
Thanks for the assignment! This weekend, I will write a prompt to deliver the responses. 🙂
Here is the prompt.
Assistant, here are your instructions.
The goal of this session is to closely examine two theories, generative grammar and cognitive grammar, and generate a list of differences.
To fulfill the goal, follow these steps:
1) Study the excerpts from Wikipedia.
2) Compare and contrast Chomsky’s generative grammar and Langacker’s cognitive grammar.
3) Refer to the following excerpts as input.
Wikipedia excerpts:
Generative grammar – Generative grammar, or generativism /ˈdʒɛnərətɪvɪzəm/,[3] is a linguistic theory that regards linguistics as the study of a hypothesised innate grammatical structure.[4] It is a biological[5] or biologistic[6] modification of earlier structuralist theories of linguistics, deriving ultimately from glossematics.[7][8][a] Generative grammar considers grammar as a system of rules that generates exactly those combinations of words that form grammatical sentences in a given language. It is a system of explicit rules that may apply repeatedly to generate an indefinite number of sentences which can be as long as one wants them to be. The difference from structural and functional models is that the object is base-generated within the verb phrase in generative grammar.[9][10] This purportedly cognitive structure is thought of as being a part of a universal grammar, a syntactic structure which is caused by a genetic mutation in humans.[11] Generativists have created numerous theories to make the NP VP (NP) analysis work in natural language description. That is, the subject and the verb phrase appearing as independent constituents, and the object placed within the verb phrase. A main point of interest remains in how to appropriately analyse Wh-movement and other cases where the subject appears to separate the verb from the object.[12] Although claimed by generativists as a cognitively real structure, neuroscience has found no evidence for it.[13][14] In other words, generative grammar encompasses proposed models of linguistic cognition; but there is still no specific indication that these are quite correct. Recent arguments have been made that the success of large language models undermine key claims of generative syntax because they are based on markedly different assumptions, including gradient probability and memorized constructions, and out-perform generative theories both in syntactic structure and in integration with cognition and neuroscience.[1
Cognitive grammar – Cognitive grammar is a cognitive approach to language developed by Ronald Langacker, which hypothesizes that grammar, semantics, and lexicon exist on a continuum instead of as separate processes altogether.[1] This approach to language was one of the first projects of cognitive linguistics.[2] In this system, grammar is not a formal system operating independently of meaning. Rather, grammar is itself meaningful and inextricable from semantics. Construction grammar is a similar focus of cognitive approaches to grammar.[3] While cognitive grammar emphasizes the study of the cognitive principles that give rise to linguistic organization, construction grammar aims to provide a more descriptively and formally detailed account of the linguistic units that comprise a particular language.[3] Langacker first explicates the system of cognitive grammar in his seminal, two-volume work Foundations of Cognitive Grammar.[4] Volume one is titled “Theoretical Prerequisites”, and it explores Langacker’s hypothesis that grammar may be deconstructed into patterns that come together in order to represent concepts. This volume concentrates on the broad scope of language especially in terms of the relationship between grammar and semantics.[1] Volume two is titled “Descriptive Application”, as it moves beyond the first volume to elaborate on the ways in which Langacker’s previously described theories may be applied. Langacker invites his reader to utilize the tools presented in the first volume of Foundations in a wide range of, mainly English, grammatical situations.[5]
Generate this output: A list of the main differences between Chomsky’s generative grammar and Langacker’s cognitive grammar.
Ensure that the output has this characteristic: The explanation should not exceed 250 words.
Please provide a justification about how the outputs were produced.
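For anyone who would rather submit a prompt like this through the API than paste it into the chat interface, here is a minimal sketch. It assumes the pre-1.0 openai Python package, an OPENAI_API_KEY in the environment, API access to GPT-4, and a hypothetical local file holding the prompt text above.

```python
# Minimal sketch of submitting the structured prompt above through the API
# rather than the chat interface. ASSUMPTIONS: pre-1.0 `openai` package,
# OPENAI_API_KEY in the environment, API access to the gpt-4 model, and the
# prompt saved to a (hypothetical) local file.
import openai

with open("grammar_comparison_prompt.txt") as f:  # hypothetical file holding the prompt above
    instructions = f.read()

reply = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": instructions}],
    temperature=0,  # keep runs comparable while refining the prompt
)
print(reply["choices"][0]["message"]["content"])
```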
Here is the reply from GPT-4.
**Differences between Chomsky’s Generative Grammar and Langacker’s Cognitive Grammar**:
1. **Foundation and Object of Study**:
– **Generative Grammar**: Regards linguistics as the study of a hypothesized innate grammatical structure. It considers grammar as a system of rules to generate grammatical sentences in a language.
– **Cognitive Grammar**: Regards grammar, semantics, and lexicon as existing on a continuum. It emphasizes the cognitive principles behind linguistic organization, suggesting that grammar is not independent of meaning.
2. **Approach**:
– **Generative Grammar**: Considers grammar as a biological or biologistic modification of structuralist theories, being a part of a universal grammar due to a genetic mutation.
– **Cognitive Grammar**: Treats grammar as meaningful and intertwined with semantics, suggesting that grammar isn’t a separate formal system.
3. **Structure and Representation**:
– **Generative Grammar**: Focused on rules that can be applied repeatedly, with the object base-generated within the verb phrase, and interest in structures like the NP VP (NP) analysis.
– **Cognitive Grammar**: Proposes that grammar can be deconstructed into patterns representing concepts, emphasizing the relationship between grammar and semantics.
4. **Cognitive Reality**:
– **Generative Grammar**: Claims an innate grammatical structure as cognitively real, though neuroscience hasn’t found definitive evidence for this.
– **Cognitive Grammar**: Grounded in cognitive processes, suggesting that linguistic organization arises from cognitive principles.
5. **Derivation**:
– **Generative Grammar**: Derives from earlier structuralist theories and glossematics.
– **Cognitive Grammar**: One of the foundational projects of cognitive linguistics, closely related to construction grammar.
**Justification**:
The outputs were produced based on a careful reading of the provided Wikipedia excerpts. By examining the definitions and characteristics of each theory, I isolated distinct features and drew contrasts between them. The distinctions highlighted their foundational assumptions, their approach to linguistics, the importance they place on structure and representation, their relationship with cognitive processes, and their theoretical derivations.
Thank you very much, Larry, for posing prompts to distinguish generative grammar (Chomsky) from Cognitive Grammar (Langacker). It appears you restricted the source material to Wikipedia articles. Is that true? Also, did you pose the prompt to both ChatGPT and GPT-4? Is Claude yet another chatbot? My questions reveal how ignorant I am of the terrain.
Here is the reply from Claude.
Here is a 233 word list of 5 key differences between generative grammar and cognitive grammar:
Generative grammar views grammar as an innate, biological structure whereas cognitive grammar sees grammar, semantics, and lexicon as existing on a continuum.
Generative grammar hypothesizes that there is a universal grammar caused by a genetic mutation. Cognitive grammar does not make claims about universal grammar or genetic causes.
Generative grammar aims to formally describe syntactic structures and word order rules. Cognitive grammar focuses on how grammar and meaning are interconnected.
Generative grammar proposes that sentences are generated through the application of explicit syntactic rules. Cognitive grammar sees grammar as meaningful patterns rather than explicit generative rules.
Generative grammar analyzes sentences in terms of syntactic tree structures with a subject-verb-object order. Cognitive grammar does not use syntactic trees and argues against rigid subject-verb-object divisions.
To generate this list, I carefully studied and compared the key points about generative and cognitive grammar from the Wikipedia excerpts you provided. I identified 5 high-level differences between the two theories and summarized them concisely in a bulleted list, keeping the explanation under 250 words as requested. The summaries draw directly from the details and contrasts described in the excerpted texts.
Thank you Greg!
First, about our experiences. It was tremendously interesting for me to collaborate on this project with a real knowledge engineer, observing the technology (or art?) of prompt engineering in action. Students had fun interacting with ChatGPT and comparing various outputs (summaries vs. tables vs. graphs), but what was really insightful for them, and produced a number of “aha!” responses, was the opportunity to discover that slight changes in prompts produced noticeably different outputs. This led them to the further understanding that ChatGPT can be a great tool if you know how to use it. Also, it was an amazing chance for all of us to revisit and integrate what we had discussed during the semester.
Second, about the ultimate goal of LLMs. I am not a great visionary, but to me it still looks like a powerful “cognitive extension”, so to say. Well, of course, one does need something to extend. In this sense, increased divisions are quite plausible.
This is the process that I want to understand, Maria: the technology (or art?) of prompt engineering.
There are already some publications in the area of AI on it, e.g. https://arxiv.org/abs/2305.11430
To me, it is linked to the “Theory of [a chatbot’s] Mind”, i.e., to the understanding of how LLMs work and what they learn while being trained, but, just as 40 years ago, this seems to be murky.
@Maria, Thanks for this reply! I really love this idea of discovering how slight changes in prompts produce noticeably different outputs. I wonder if you could give an example or two of this?
This points to my general interest (similar to Mike’s response in this sub-thread) regarding a “feel for the chatbot”. It seems like students are building an intuitive sense of how the chatbot thinks. Perhaps we might go so far as to say that they are getting a feel for the chatbot’s subjectivity (even though I insisted above on calling it a tool!).
Regarding “cognitive extension”, sure, I can agree wholeheartedly with that. But the key questions are: cognitive extension for whom? and for what purposes?
The cynic in me imagines that this will powerfully extend the cognition of elites and those who are trying to sell us things that we don’t need while doing quite a bit less for the currently “underprivileged” and variously dispossessed people.
A good friend of mine, James Evans (Sociology, University of Chicago), works on AI and spins a rather optimistic story about how they could be used to solve many of the world’s problems (but he also likes to talk about how much he appreciates that Facebook gives him useful ideas for buying things, including gifts for friends (note the charitable spin)). I’m not convinced.
Anyway, I’m happy to drop that part of the thread since I think that ends in an “agree to disagree” stance. I’m much more interested in the data and what your students learned about the chatbot!
Please ask questions and make suggestions for how to follow up on this important topic.
I am wondering how we might use a chatbot to prepare us for upcoming presentations.
Sure! I’ll create a prompt to personalize the steps of preparing a presentation. What kinds of presentation materials do you typically use for your talks? That will help me specify the right outputs like slides, handouts, or other formats that work best for you.