Accessibility, responsibility, and ethics in design

News you can use from Design Twitter

Social media platforms have been dominating the news lately, whether it's the recent Twitter security breach, Congress's antitrust hearings, or the potential TikTok ban or sale to Microsoft. These events make it abundantly clear why it's more important than ever to think critically about the impact of the digital products we create. How do we make sure the products we're building take accessibility, responsibility, and ethics into account? The good news is that the conversations on Design and Tech Twitter are growing, hopefully leading to the adoption of better, more humane practices.

Designing with accessibility in mind

In a video that went viral on Twitter, Kristy Viers shared how she uses her iPhone as someone who’s visually impaired. She’s since created a YouTube channel to share how she uses other devices.

Should robots design?

GPT-3 is a machine learning model created by OpenAI, an AI research and deployment company based in San Francisco, California. Primed with just a few words or sentences, the system can write stories, poems, articles, and working code. Jordan Singer, a designer at Cash App, shared a demo of a Figma plugin he created using GPT-3.

Lolita Taub, an operator and investor, uncovers the sexism behind GPT-3. Because GPT-3's training data is collected from the internet, it has learned from human behavior, which can lead to bias. Lolita believes we can raise AI to address human challenges instead; watch her TEDx Talk on the subject.

Looking at software with a critical lens

Digital designer Carolyn Zhang shared a framework for mapping career tracks within the product design industry, and she asks whether looking at software with a critical lens and examining its impact on society can itself be a career choice. In the thread, other product designers responded with their thoughts on where the industry is and where it's headed.

Battling AI bias in facial recognition technology

Carolyn's question can be answered by computer scientist Joy Buolamwini, founder of the Algorithmic Justice League, who has been researching algorithmic bias since 2016. Spurred by the ongoing protests over systemic racism, her groundbreaking studies helped persuade corporate giants including Amazon, IBM, and Microsoft to halt their facial recognition technology projects. Read more about Joy's work in Fast Company's Creative People issue.

Want some additional reading on design ethics? Check out Matthew Ström’s Ethics for Designers series and Kelly Small’s new book The Conscious Creative. How are you or your organization incorporating ethics into your design process? Share with us on Twitter at @goabstract.


Illustration by Daina Lightfoot
Published on August 13, 2020
