Mellichamp Initiative in Mind & Machine Intelligence Summit 2024

Dates
April 18–19, 2024
Location
Henley Hall 1010

AI and Human Creativity


 Free Event, Registration Required: forms.gle/ABnw94a5qoBYa6SW9

 

Overview

AI’s capabilities to create visual art, music, stories, and videos are improving rapidly. AI can also teach humans innovative new strategies, previously unknown to us, in games such as chess and Go, and it promises to revolutionize human problem-solving. These advances raise important questions about AI and Human Creativity.

How does AI-generated creativity differ from the human creative process? Where does generative AI fall short of human abilities, and where does it exceed them? How can AI be used to potentiate human creativity? Will the widespread adoption of generative AI stifle human creativity? What is the future role of the artist amid the proliferation of generative AI? What legal frameworks and challenges exist for protecting artists’ copyrights?

Join us at the 2024 Mellichamp Mind and Machine Intelligence Annual Summit, where leaders from Art, Music, Computer Science, Literature, Psychology, and Philosophy converge. Over two days (April 18th and 19th, 2024), we’ll explore examples of using AI in the creative process, discuss these pressing questions, and ignite debates around the interplay of AI and Human Creativity.

 

 Program

 

April 18th

8:30am Welcome and Opening Remarks

9:00am Session 1: Creative Dialogues: AI, Artists, and Architects in Conversation

11:30am Session 2: How AI Alters What We Say and See

2:30pm Session 3: Understanding Human Creativity

4:00pm Keynote Lecture: Ahmed Elgammal, Rutgers University "Art in the Age of AI"

 

April 19th

9:00am Session 1: New Ideas of Creativity and Culture

11:30am Session 2: Security and Law in a World of GPT and Generative AI

Detailed Program

 

 Organizers

AI/Human Creativity Contest for Undergraduate and Graduate Students

Call for Submissions

 Speakers

 


Ahmed Elgammal
Professor of Computer Science
Rutgers University
"Art in the Age of AI"

Abstract: Art creation and appreciation are hallmarks of human intelligence. Recently, Artificial Intelligence has made groundbreaking strides into the artistic domain, revolutionizing both the analysis and synthesis of art. This transformative impact is evident across various genres, including visual arts and music. In my talk, I will provide a comprehensive overview of AI's latest advancements in art. More importantly, I will delve into how these advancements are reshaping our understanding of creativity and the subjective human experience. Additionally, I will address the emerging ethical considerations that accompany the integration of AI in art, highlighting both the challenges and opportunities this fusion presents.

Bio: Dr. Ahmed Elgammal is a professor in the Department of Computer Science and an Executive Council Faculty member at the Center for Cognitive Science at Rutgers University. He is the founder and director of the Art and Artificial Intelligence Laboratory at Rutgers. Dr. Elgammal has published over 200 peer-reviewed papers, book chapters, and books in the fields of computer vision, machine learning, and digital humanities. His research on knowledge discovery in art history, AI art generation, and AI-based art authentication has received global media attention, including coverage in The Washington Post, The New York Times, CNN, NBC, CBS News, Science News, and many other outlets. In 2016, a TV segment about his research, produced for PBS, won an Emmy award. In 2017, he developed AICAN, an autonomous AI artist and collaborative creative partner, which was acclaimed in an Artsy editorial as “the biggest artistic achievement of the year.” AICAN art has been exhibited in several art venues in Los Angeles, Frankfurt, San Francisco, and New York City, as well as at the National Museum of China. In 2021, he led the AI team that completed Beethoven’s 10th symphony, which received worldwide media coverage. He received M.Sc. and Ph.D. degrees in computer science from the University of Maryland, College Park, in 2000 and 2002, respectively.

 


Lai-Tze Fan
Assistant Professor of Sociology & Legal Studies and English Literature
University of Waterloo
"AI, Creativity, and Access"

Abstract: In this talk, I will explore the asymmetry of access and accessibility in commercialized computational platforms, including those used for creative purposes. As more and more platforms restrict users’ and third-party developers’ access to code, databases, and application programming interfaces, questions of access and transparency become imperative, because many of the platforms and tools used in creative coding, creative digital art, and creative electronic literature variously restrict who gets to use them.

Focusing on the field of electronic literature, I discuss recent e-literature works that use automated and machine learning tools, including John Cayley's performance The Listeners, Jhave Johnston’s AI-generated poem ReRites, and Sasha Stiles’s automated system Technelegy. Behind each of these cutting-edge works of creative writing is the platform that enabled its creation. Insofar as these platforms and tools provide the parameters for creative possibility, we should know what their limitations are. We should want to know their blind spots (and I use this word on purpose): where they have limited accommodations and accessibility, as well as who gets to define and change these valuations of access and accessibility. How much of our creative capacity in e-literature, as well as our possibilities for access and accessibility, is determined by the design of the platforms and tools that we use?

Bio: Lai-Tze Fan is the Canada Research Chair in Technology and Social Change and an Assistant Professor of Sociology & Legal Studies and English Literature at the University of Waterloo, Canada. She is Associate Professor II at the Center for Digital Narrative at the University of Bergen, Norway, and the Founder and Director of the U&AI Lab at Waterloo, which uses arts-based methods for enhanced EDI outcomes in AI design. She is on the Board of Directors of Waterloo’s TRuST scholarly network, which targets misinformation and public trust in AI. Fan’s work focuses on systemic inequities in technological design and labour, digital storytelling, research-creation, critical making, and media theory and infrastructure. Fan serves as an Editor and the Director of Communications of electronic book review and an Editor of the digital review.

 

 


Adam Green
Provost's Distinguished Associate Professor of Psychology 
Georgetown University
"Action Potentials and Assessing Potential: A Brief Tour of the Neuroscience of Creativity and New Avenues for AI Modeling of Creativity in College Admissions"

Abstract: This talk is about two things. First, I’ll give a very brief overview of some important developments in the cognitive neuroscience of creativity. I’ll focus on neuroimaging evidence regarding connectivity between the brain’s so-called “default mode” network and the frontoparietal control network. I’ll also briefly note outstanding questions and a broader set of findings, including electrical neuromodulation of creative cognition, that point to the multifaceted nature of creativity in the brain. Second, I’ll describe recent research in a massive-scale data set that is investigating the capacity of large language models and other computational approaches to assess creativity in college applicants. This research shows strong prediction of future college GPA from computational metrics derived from applicants’ personal statements. This predictiveness holds after controlling for race and SAT score, and combining the creativity metric with SAT scores yields stronger prediction than either metric alone. Notably, the creativity metric is 15 times less associated with race than SAT scores are, and combining the creativity metric with SAT scores shows potential to decrease disparity while boosting prediction of future academic achievement.

Bio: Adam is a Professor in the Department of Psychology and the Interdisciplinary Program in Neuroscience at Georgetown, and director of the Laboratory for Relational Cognition. Adam's motivating interest is in human creative intelligence and especially in understanding how neural processes constitute our best ideas. Adam’s work includes research into endogenous neural mechanisms and exogenous neuromodulation that support creative relational reasoning, as well as research on the neuroscience of teaching and learning in real-world classrooms, and research that integrates creativity assessment and training into educational contexts. He is a founder and past President of the Society for the Neuroscience of Creativity, and Editor-in-Chief of the Creativity Research Journal.

 

 


Wenbo Guo
Assistant Professor of Computer Science
University of California, Santa Barbara
"Understanding and Harnessing Reinforcement Learning for Security Purposes"

Abstract: In this talk, I will introduce our recent endeavors in understanding the decision-making process of reinforcement learning and harnessing reinforcement learning for different security applications. In particular, I will start with our work in explaining reinforcement learning through statistical modeling and demonstrate the utility of our explanation in understanding and fixing model errors. Then, I will demonstrate the usage of reinforcement learning in program fuzzing, a classical software security problem, and large language model jailbreaking, a newly emerging security threat.

Bio: Wenbo Guo is an assistant professor in the UCSB Department of Computer Science. His research interests are machine learning and cybersecurity. His recent research includes designing foundation models for software and network security problems, building reinforcement learning-driven planning and scheduling systems for security problems, and improving the explainability and robustness of large models and reinforcement learning. He is a recipient of the IBM Ph.D. Fellowship (2020-2022), a finalist for the Facebook and Baidu Ph.D. Fellowships (2020), and a recipient of the ACM CCS Outstanding Paper Award (2018). His research has been featured by multiple mainstream media outlets and has appeared in a diverse set of top-tier venues in security and machine learning. Going beyond academic research, he also actively participates in many world-class cybersecurity competitions, including the AIxCC competition as part of the Shellphish team.

 

 


Daphne Ippolito
Assistant Professor, Language Technologies Institute
Carnegie Mellon University
"Creative Writing with an AI-Powered Writing Assistant: Perspectives from Professional Writers"

Abstract: In this talk, I will describe Wordcraft, a text editor we built in 2021 with the goal of making it easier for creative writers to collaborate with a generative language model to write stories. Later that year, we ran the Wordcraft Writers Workshop, in which we commissioned 13 professional writers from a diverse set of creative writing backgrounds to craft stories using the tool. Drawing from interviews and the participants' journals, I will describe both what Wordcraft did well and the ways in which it failed to meet the expectations and goals of participants. Finally, I will give a 2024 perspective: three years later, language models have become much better at being coherent and conversational, but many of the challenges observed by our workshop's participants continue to limit the effective deployment of LMs in creative writing applications.

Bio: Daphne Ippolito is an assistant professor at Carnegie Mellon University. She studies the tradeoffs and limitations of generating text with language models, as well as strategies for evaluating natural language generation systems. She also researches how to incorporate AI-in-the-loop language generation into assistive tools for writers. She completed her PhD at the University of Pennsylvania, co-advised by Professor Chris Callison-Burch at UPenn and Principal Scientist Douglas Eck at Google Brain.

 

 

 

 


Daniel Levitin
James McGill Professor Emeritus of Psychology and Neuroscience
McGill University
"Creativity in Music: Evolution or Revolution?"

Abstract: Works of art that we judge to be the most creative often involve the artists working under constraints to produce something novel, or something that pushes the edges of these assumed constraints. Some of the most treasured music has come to exist not as a result of revolution, but by way of evolution. It’s not really true invention, but a wide blending of previous work. Mozart didn’t invent the symphony or the sonata; what Mozart is recognized for is his ability to work within the tight constraints provided, and yet still be able to come up with such ground-breaking musical statements. I’ll illustrate using vivid musical examples that range from Beethoven to boogie-woogie, and from Ike Turner to The Beatles. There are apt parallels to innovation in science, engineering, literature, and business. Through this lens, we can newly appreciate the importance of a liberal arts education as the foundation of innovation. Mozart, Beethoven, Ike Turner, and The Beatles were walking encyclopedias of the music that came before them, able to accomplish greatness because of their deep understanding of the history of creativity.

Bio: Dr. Daniel Levitin is James McGill Professor Emeritus of Psychology and Neuroscience at McGill University, and Founding Dean of Minerva University in San Francisco. His research addresses fundamental questions in auditory memory, musical structure, and the neuroanatomy and neurochemistry of musical experience. He has published 75 peer-reviewed articles in journals such as Science, Nature, PNAS, Neuron, and Cognition.

He earned his B.A. from Stanford University, and his Ph.D. in Psychology at the University of Oregon. He completed post-doctoral training at the Stanford University Medical School and at UC Berkeley.

In his spare time, he writes about health, science, and medicine for The New Yorker, The Atlantic, and The New York Times, and appears regularly on NPR. He is the author of five consecutive bestselling books: This Is Your Brain On Music, The World in Six Songs, The Organized Mind, Successful Aging, and A Field Guide to Lies. His forthcoming book, I Heard There Was a Secret Chord: Music as Medicine, will be available this August.

As a musician (saxophone, guitar, vocals, and bass), he has performed with Mel Tormé, Bobby McFerrin, Rosanne Cash, Sting, Renée Fleming, Victor Wooten, Neil Young, and David Byrne. He has produced and consulted on albums by Stevie Wonder, Steely Dan, and Joni Mitchell, and has been awarded 17 gold and platinum records.

Image courtesy of Rose Eichenbaum

 

 


Fabian Offert
Assistant Professor for the History and Theory of the Digital Humanities
University of California, Santa Barbara
"Machine Visual Culture"

Abstract: Computer vision models are trained on huge scrapes of internet culture, extracting and fossilizing many parts of the Western visual canon. A "machine visual culture" thus invisibly determines the space of visual possibilities in all aspects of digital life. My talk will argue that understanding the conceptual logic of this machine visual culture is not only one of the most powerful ways of probing the ideology of artificial intelligence, it also allows us to rethink the notion of visual culture itself.

Bio: Fabian Offert is Assistant Professor for the History and Theory of the Digital Humanities at the University of California, Santa Barbara. His research and teaching focus on the visual digital humanities, with a special interest in the epistemology and aesthetics of computer vision and machine learning. His current book project focuses on "Machine Visual Culture" in the age of foundation models. Fabian is also principal investigator of the international research project "AI Forensics" (2022-25), funded by the Volkswagen Foundation, and was principal investigator of the UCHRI multi-campus research group "Critical Machine Learning Studies" (2021-22). Before joining the faculty at UCSB, he served as a postdoctoral researcher in the German Research Foundation’s special interest group "The Digital Image", an associated researcher in the Critical Artificial Intelligence Group (KIM) at Karlsruhe University of Arts and Design, and Assistant Curator at ZKM Karlsruhe, Germany.

 


Rita Raley
Professor of English
University of California, Santa Barbara
"Creativity in the Age of AI"

Bio: Rita Raley is Professor of English at UC Santa Barbara. Her work focuses on digital media in relation to contemporary literary, artistic, cultural, and linguistic practices. Most recently, this has led to publications focusing on GPT-2, generative AI, and what people are doing with large language models. Her collaborative projects include a forthcoming PMLA cluster prompted by LLMs, as well as work toward Critical Machine Learning and “Critical AI.” In addition to previous teaching positions at the University of Minnesota, Rice, and NYU, she has held fellowship and short-term residency appointments hosted by the National Humanities Center; the University of Bergen, Norway; the Dutch Foundation for Literature in Amsterdam; and UCLA, the last as part of a Mellon-funded project on the Digital Humanities.

 

 


Pamela Samuelson
Richard M. Sherman ’74 Distinguished Professor of Law
University of California at Berkeley
"How Copyright Law Conceptualizes Creative Expression"

Abstract: Since the mid-1960s, copyright professionals have debated whether computer-generated outputs of texts, images, and the like should be eligible for copyright protection as creative expressions and, if so, who would be the outputs' authors. In the last year or so, the U.S. Copyright Office has decided that outputs of generative AI systems should be ineligible for copyright protection because they are not expressions of human authors, as the Office believes the law requires. The Office has asked for public comments on its policy position and is likely to produce a report later this year discussing various arguments for and against its current policy. Having written an article in 1985 on this topic, I can offer a historical perspective on how the debate over this issue has evolved since the 1960s.

Bio: Pamela Samuelson is the Richard M. Sherman ’74 Distinguished Professor of Law at the University of California at Berkeley and a Co-Director of the Berkeley Center for Law & Technology. She has written and spoken extensively about the challenges that new information technologies pose for traditional legal regimes, especially for intellectual property law. She is co-founder and president of Authors Alliance. She is a member of the American Academy of Arts & Sciences, a Fellow of the Association for Computing Machinery (ACM), a Contributing Editor of Communications of the ACM, a past Fellow of the John D. & Catherine T. MacArthur Foundation, and an Honorary Professor of the University of Amsterdam. She is also Chair of the Board of Directors of the Electronic Frontier Foundation. She joined the Berkeley faculty in 1996 after serving as a professor at the University of Pittsburgh Law School. She has also visited at Columbia, Cornell, Fordham, Harvard, and NYU Law Schools.

 


Antonio Somaini
Professor of Film, Media, and Visual Culture Theory
Université Sorbonne Nouvelle
"The Visible and the Sayable. On Generative AI, Images and Words"

Abstract: One of the most striking features of the current impact of AI technologies on images and visual cultures is the fact that these technologies are profoundly reorganizing the relations between images and words. The recent wave of text-to-image and text-to-video models, in particular, is leading us towards a new cultural landscape in which what is visible depends more and more on what is sayable. With prompts, which operate like search queries exploring the latent spaces produced by the various generative AI models, language becomes a new medium for image production, in a completely unprecedented way. Prompts act as a new kind of "speech act" and as a new form of "remediation" that activates features of previous visual media by using the terms that have been historically associated with them.

The talk will also analyze how this profound reorganization of the relations between images and words is tackled by contemporary artists such as Erik Bullot, Grégory Chatonsky, and others.

Bio: Antonio Somaini is Professor of Film, Media, and Visual Culture Theory at the Université Sorbonne Nouvelle in Paris, and a Senior Member of the Institut Universitaire de France (IUF). His current research project deals with the impact of AI technologies on images and visual culture. His recent publications include the article "Algorithmic Images. Artificial Intelligence and Visual Culture" (Grey Room, 93, Fall 2023) and the book Culture visuelle. Images, regards, médias, dispositifs [Visual Culture. Images, Gazes, Media, Dispositives] (with Andrea Pinotti, Les Presses du Réel, 2022). In 2020 he curated the exhibition Time Machine: Cinematic Temporalities in the Italian city of Parma (catalogue published by Skira, website www.timemachineexhibition.com). He is currently curating an exhibition on AI and contemporary art at the Jeu de Paume museum in Paris, entitled Latent Spaces. The World Through AI.

 


Davor Vincze
Composer, Postdoctoral Researcher
Hong Kong Baptist University
"Virtual Voices"

Abstract: Creating a large AI model requires a lot of data and data mining, which is costly. Logically, the companies that invest time and money in developing creative tools for music or art want to get a return on that investment. Consequently, the tools that come to market are often developed with a clear and limited goal in mind, with a tendency to create ever more of the art that already exists.

As a composer/artist working in the field of contemporary music and exploring emerging technologies for music production, I am curious about how I can incorporate these tools into my own artworks. In my experience, it is often the case that, when a machine learning tool fails to do what it is supposed to do, we start to perceive a "virtual voice" behind the machine. It is precisely in these unexpected/queer outcomes, which lie outside our usual thought process (often bound by the laws of physics or genetics and anchored in historical or socio-cultural realities), that I see the greatest potential for innovative creative practices.

Bio: Davor Vincze is a composer of contemporary music whose artistic focus lies in meta-reality and musical mosaicking. Inspired by technology and science fiction, he searches for hidden acoustic spaces or ways to blur the real and the imaginary, often using electronics and AI tools. Working with mosaics (multiple copies of fragmented sound gestures), using a technique he calls ‘microllage’, Vincze searches for fluid sounds that allow for non-binary, ambiguous, or "androgynous" interpretations. After completing his composition studies in Graz and Stuttgart, Vincze specialized in electronic music at IRCAM and finally completed his doctorate at Stanford University. His compositions have been performed by renowned international ensembles and musicians such as Ensemble Modern, Ensemble Recherche, Ensemble Intercontemporain, Klangforum Wien, Talea, Slagwerk den Haag, the JACK and Del Sol Quartets, the Secession and No Borders Orchestras, and the Slovene and Zagreb Philharmonics, among others, at festivals such as Présences, Impuls, MATA, Manifeste, Darmstadt, the Zagreb Biennale, and others. In 2023, he completed his Arts Fellowship at Emory University in Atlanta. In 2014, Vincze founded Novalis, an international festival for contemporary music. Since 2023 he has been co-director of the Music Biennale Zagreb.

Vincze's works are published by Maison ONA in Paris. Vincze won the Alain Louvier Prize, the Stuttgart Composition Competition, second prize at the Pre-Art Composition Competition, and the Impuls Festival competition, and has been awarded many stipends in support of his studies and the creation of new works. In 2020/21, Vincze was the winner of the "Boris Papandopulo Prize" for the best Croatian composer of contemporary music, winner of the European Contemporary Composition Orchestra competition, and winner of best audiovisual work at the International Competition Città di Udine (Italy), as well as one of five awardees of the 'New Music, New Paths' competition in Hong Kong; he was also selected for artist residencies at the Institute of Electronic Music in Graz (2021) and the SWR Experimentalstudio (2022).

His piece "XinSheng" was selected for the Noperas! production program in Germany, and a new, expanded version under the new title "Freedom Collective" will premiere in several German theatres (Gelsenkirchen, Bremen, and Darmstadt) in 2024. Vincze will continue to develop this interdisciplinary opera project as part of his postdoctoral research, which he began in 2023 at Hong Kong Baptist University.

 

 


Jennifer Walshe
Professor of Composition
University of Oxford
"13 Ways of Looking at AI, Art & Music"

Abstract: AI is not a singular phenomenon. We talk about it as if it’s a monolithic identity, but it’s many, many different things – the fantasy partner chatbot whispering sweet virtual nothings in our ears, the algorithm scanning our faces at passport control, the playlists we’re served when we can’t be bothered to pick an album. The technology is similar in each case, but the networks, the datasets and the outcomes are all different.

The same goes for art and music made using AI. We can listen to Frank Sinatra singing a cover of a rap song from beyond the grave, we can look at paintings made by robots, we can hang out in the comments section of a machine learning-generated death-metal livestream (‘sick drum solo bruh’). But the fact that artworks like these are made using AI doesn’t mean that they are all asking the same questions or have the same goals. We experience these works – and the way AI is used in them – in a multitude of ways.

So how should we think about art and music made with AI? Instead of looking for a definitive approach, one clean (and/or hot) take to rule them all, perhaps we can try to think like the networks do – in higher dimensions. From multiple positions, simultaneously. Messily. Not one way of looking at AI, but many.

Bio: “The most original compositional voice to emerge from Ireland in the past 20 years” (The Irish Times) and “Wild girl of Darmstadt” (Frankfurter Rundschau), composer and performer Jennifer Walshe was born in Dublin, Ireland. Her music has been commissioned, broadcast and performed all over the world. She has been the recipient of fellowships and prizes from the Foundation for Contemporary Arts, New York, the DAAD Berliner Künstlerprogramm, the Internationales Musikinstitut, Darmstadt and Akademie Schloss Solitude among others. Recent projects include TIME TIME TIME, an opera written in collaboration with the philosopher Timothy Morton, and THE SITE OF AN INVESTIGATION, a 30-minute epic for Walshe’s voice and orchestra, commissioned by the National Symphony Orchestra of Ireland. THE SITE has been performed by Walshe and the NSO, the BBC Scottish Symphony Orchestra and also the Lithuanian State Symphony Orchestra. Walshe has worked extensively with AI. ULTRACHUNK, made in collaboration with Memo Akten in 2018, features an AI-generated version of Walshe. A Late Anthology of Early Music Vol. 1: Ancient to Renaissance, her third solo album, released on Tetbind in 2020, uses AI to rework canonical works from early Western music history. A Late Anthology was chosen as an album of the year in The Irish Times, The Wire and The Quietus. Walshe is currently professor of composition at the University of Oxford. Her work was profiled by Alex Ross in The New Yorker.

Image courtesy of Blackie Bouffant

 

 


Maria Yablonina
Assistant Professor at the Daniels Faculty of Architecture, Landscape, and Design, and a Faculty Member at the Robotics Institute
University of Toronto
"Not Not Collaborating with Machines"  

Bio: Maria Yablonina is an engineer, researcher, and artist working in the fields of computational design and digital fabrication. Her work lies at the intersection of architecture and robotics, producing spaces and robotic systems that can construct themselves and change in real time. Such architectural productions include the development of hardware and software solutions, as well as complementary architectural and material systems, in order to offer new design spaces.

Maria’s practice focuses on designing machines that make and occupy architecture — a practice that she broadly describes as Designing [with] Machines (D[w]M). D[w]M aims to establish design methodologies that consider robotic hardware development as part of the design process and its output. Rather than simply using available equipment and production technologies, Maria seeks to imagine new ways of engaging with spaces through the design of the machine body in response to its environment. Just like spiders that adapt to their context when constructing their webs, Maria’s machines crawl along walls and surfaces, exploring and augmenting them over time. Her machines are designed to operate within the unglamorous environments of everyday spaces. D[w]M approaches robotics as an architecture-specific rather than industry-specific design practice. Maria argues that this shift in approach affords a shift in priorities: away from demolition and rebuilding, towards care and repair of the building stock that is already there. Currently, Maria is an Assistant Professor at the Daniels Faculty of Architecture, Landscape, and Design, and a Faculty Member at the Robotics Institute at the University of Toronto. In collaboration with Mitchell Akiyama, Maria runs MAYB studio (pronounced as an abbreviation: M-A-Y-B). Working in installation, interactive

 

Sponsors

Directions