Biases in Generative Art: A Causal Look from the Lens of Art History
With rapid progress in artificial intelligence (AI), the popularity of generative art has grown substantially. From creating paintings to generating novel art styles, AI-based generative art has showcased a variety of applications. However, there has been little focus on the ethical impacts of AI-based generative art. In this work, we investigate biases in the generative art AI pipeline, from those that can originate in improper problem formulation to those related to algorithm design. Viewing these biases through the lens of art history, we discuss their socio-cultural impacts. Leveraging causal models, we highlight how current methods fall short in modeling the process of art creation and thus contribute to various types of bias. We illustrate this through case studies, in particular those related to style transfer. Finally, we outline a few considerations for designing generative art AI pipelines.
Ramya Srinivasan, Ph.D., is an AI researcher with Fujitsu Laboratories of America. In this role, Ramya is involved in the design and development of fair and explainable AI solutions, considering the requirements of the various stakeholders involved in the pipeline. Ramya's research interests are in the broad areas of computer vision, explainable AI, causality, and AI ethics.