Microsoft Azure Blog

See how 3 industry-leading companies are driving innovation in a new episode of Inside Azure for IT


In this episode, you’ll get a behind-the-scenes look at how three companies are using cutting-edge technologies like high-performance computing, quantum computing, and AI to solve complex challenges, power innovation, and generate new kinds of business impact.

I had the awesome opportunity to talk with a few people innovating with some of the most exciting next-generation tech in our latest episode of the Inside Azure for IT fireside chat series. Many of us, myself included, spend a lot of time focused on challenges that need to be addressed today—in this minute—leaving less time for creativity and longer-range planning. The same is true for many organizations. When businesses face downtime, traditional hardware restrictions, or the need to adapt quickly to change, productivity suffers and innovation stalls.

What we hear from IT leaders is that digital transformation becomes a reality when they can go from doing their job despite technology limitations to innovating and delivering on priorities because of the technology they’re using—specifically global, cloud-based infrastructure.

Driving innovation across industries with Azure

The episode is divided into three separate segments so you can watch them individually on-demand, at your convenience.

Part 1: Jeremy Smith and Karla Young on how Jellyfish Pictures virtualized their entire animation and visual effects studio with Azure

In this segment, you’ll hear from Jeremy Smith, CTO, and Karla Young, Head of PR, Marketing, and Communications at Jellyfish Pictures, about how they create the amazing visuals we see in movies like How to Train Your Dragon: Homecoming, or some of the recent Star Wars films—both big favorites for my family! Using Azure high-performance computing to accelerate image rendering, they can spin up tens of thousands of cores at a moment’s notice and manage all that rich content securely in a single place, without replication.
Watch now: Virtualizing animation with Azure high-performance computing.

Part 2: Anita Ramanan and Viktor Veis on using quantum computing to address a complex scheduling challenge for NASA’s Jet Propulsion Laboratory

In the second segment, I’m joined by members of the Azure Quantum team—Anita Ramanan, Technical Program Manager Lead for Optimization in Azure Quantum, and Viktor Veis, Azure Quantum Group Software Engineering Manager—to talk about a project they worked on with NASA’s Jet Propulsion Laboratory. They share how they used quantum-inspired algorithms to create schedules for spacecraft communications in minutes rather than hours—and how Azure Quantum can address similar challenges in almost every industry, from manufacturing to healthcare.
Watch now: A quantum-inspired approach to scheduling communications in space.

Part 3: Alex Oelling on how Volocopter is powering an urban air mobility ecosystem of self-flying air taxis and drone services with Azure infrastructure and AI

In the third segment, I chat with Alex Oelling, Chief Digital Officer at Volocopter, about how they are bringing urban air travel to life in major cities. A true pioneer in providing air taxi and drone services in urban environments, Volocopter is building a cloud-based solution to work with smart cities and existing mobility operations using Azure infrastructure and AI.
Watch now: Pioneering urban air travel in major cities with Azure infrastructure and AI.

When we launched Inside Azure for IT last July, our goal was to create a place where cloud professionals could come to learn Azure best practices and insights that would help them transform their IT operations. Whether you’ve tuned in for our live ask-the-experts sessions, watched deep-dive skilling videos, or joined us for fireside chats—we want to say "thank you" for engaging with us and bringing us your hardest questions.

Stay current with Inside Azure for IT

Beyond this latest episode, there are many more technical and cloud-skilling resources available through Inside Azure for IT. Learn more about empowering an adaptive IT environment with best practices and resources designed to enable productivity, digital transformation, and innovation. Take advantage of technical training videos and learn about implementing these scenarios.

Responsible AI investments and safeguards for facial recognition


Azure Cognitive Services deliver high-quality, consent-driven face recognition that developers use to power verification of human identities on mobile, desktop, and Internet of Things (IoT) devices, as well as facial detection and redaction capabilities for accessibility, modern productivity, and privacy.

A core priority for the Cognitive Services team is to ensure its AI technology, including facial recognition, is developed and used responsibly. While we have adopted six essential principles to guide our work in AI more broadly, we recognized early on that the unique risks and opportunities posed by facial recognition technology necessitate its own set of guiding principles.

To strengthen our commitment to these principles and set up a stronger foundation for the future, Microsoft is announcing meaningful updates to its Responsible AI Standard, the internal playbook that guides our AI product development and deployment. As part of aligning our products to this new Standard, we have updated our approach to facial recognition including adding a new Limited Access policy, removing AI classifiers of sensitive attributes, and bolstering our investments in fairness and transparency.

Safeguards for responsible use

We continue to provide consistent and clear guidance on the responsible deployment of facial recognition technology and advocate for laws to regulate it, but there is still more we must do.

Effective today, new customers must apply for access to use facial recognition operations in Azure Face API, Computer Vision, and Video Indexer. Existing customers have one year to apply and receive approval for continued access to the facial recognition services based on their provided use cases. By introducing Limited Access, we add a layer of scrutiny to the use and deployment of facial recognition to ensure that use of these services aligns with Microsoft’s Responsible AI Standard and contributes to high-value end-user and societal benefit. This includes introducing use case and customer eligibility requirements to gain access to these services. Read about example use cases, and use cases to avoid, here.

Starting June 30, 2023, existing customers will no longer be able to access facial recognition capabilities if their facial recognition application has not been approved. Submit an application form for facial and celebrity recognition operations in Face API, Computer Vision, and Azure Video Indexer here, and our team will be in touch via email.

Facial detection capabilities (including detecting blur, exposure, glasses, head pose, landmarks, noise, occlusion, and facial bounding box) will remain generally available and do not require an application.

In another change, we will retire facial analysis capabilities that purport to infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup. We collaborated with internal and external researchers to understand the limitations and potential benefits of this technology and navigate the tradeoffs. In the case of emotion classification specifically, these efforts raised important questions about privacy, the lack of consensus on a definition of “emotions,” and the inability to generalize the linkage between facial expression and emotional state across use cases, regions, and demographics. API access to capabilities that predict sensitive attributes also opens up a wide range of ways they can be misused—including subjecting people to stereotyping, discrimination, or unfair denial of services.

To mitigate these risks, we have opted to not support a general-purpose system in the Face API that purports to infer emotional states, gender, age, smile, facial hair, hair, and makeup. Detection of these attributes will no longer be available to new customers beginning June 21, 2022, and existing customers have until June 30, 2023, to discontinue use of these attributes before they are retired.

While API access to these attributes will no longer be available to customers for general-purpose use, Microsoft recognizes these capabilities can be valuable when used for a set of controlled accessibility scenarios. Microsoft remains committed to supporting technology for people with disabilities and will continue to use these capabilities in support of this goal by integrating them into applications such as Seeing AI.

Responsible development: improving performance for inclusive AI

In line with Microsoft’s AI principle of fairness and the supporting goals and requirements outlined in the Responsible AI Standard, we are bolstering our investments in fairness and transparency. We are undertaking responsible data collections to identify and mitigate disparities in the performance of the technology across demographic groups and assessing ways to present this information in a way that would be insightful and actionable for our customers.

Given the potential socio-technical risks posed by facial recognition technology, we are looking both within and beyond Microsoft to include the expertise of statisticians, AI/ML fairness experts, and human-computer interaction experts in this effort. We have also consulted with anthropologists to help us deepen our understanding of human facial morphology and ensure that our data collection is reflective of the diversity our customers encounter in their applications.

While this work is underway, and in addition to the safeguards described above, we are providing guidance and tools to empower our customers to deploy this technology responsibly. Microsoft is providing customers with new tools and resources to help evaluate how well the models are performing against their own data and to use the technology to understand limitations in their own deployments. Azure Cognitive Services customers can now take advantage of the open-source Fairlearn package and Microsoft’s Fairness Dashboard to measure the fairness of Microsoft’s facial verification algorithms on their own data—allowing them to identify and address potential fairness issues that could affect different demographic groups before they deploy their technology. We encourage you to contact us with any questions about how to conduct a fairness evaluation with your own data.
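To illustrate what this kind of disaggregated evaluation looks like, here is a minimal sketch that computes a per-group false non-match rate from hypothetical verification results. The group labels and data are invented for the example; in practice, Fairlearn’s MetricFrame automates this grouping over real metrics and sensitive features.

```python
# Minimal sketch of a disaggregated fairness check for face verification.
# The records and group labels below are hypothetical; Fairlearn's
# MetricFrame automates this kind of per-group metric computation.
from collections import defaultdict

def false_non_match_rate_by_group(records):
    """records: iterable of (group, is_genuine_pair, verified) tuples.

    A false non-match is a genuine pair the system failed to verify.
    Returns {group: false_non_match_rate} over genuine pairs only.
    """
    genuine = defaultdict(int)
    misses = defaultdict(int)
    for group, is_genuine_pair, verified in records:
        if is_genuine_pair:
            genuine[group] += 1
            if not verified:
                misses[group] += 1
    return {g: misses[g] / genuine[g] for g in genuine}

# Hypothetical evaluation data: (group, genuine pair?, verified?)
data = [
    ("group_a", True, True), ("group_a", True, False),
    ("group_a", True, True), ("group_a", False, False),
    ("group_b", True, True), ("group_b", True, True),
    ("group_b", True, True), ("group_b", False, False),
]
rates = false_non_match_rate_by_group(data)
```

A gap between the groups’ rates is the kind of disparity this evaluation is designed to surface before deployment.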

We have also updated the transparency documentation with guidance to help our customers improve the accuracy and fairness of their systems by incorporating meaningful human review to detect and resolve cases of misidentification or other failures, by providing support to people who believe their results were incorrect, and by identifying and addressing fluctuations in accuracy due to variation in operational conditions.

In working with customers using our Face service, we also realized that some errors originally attributed to fairness issues were in fact caused by poor image quality. If the image someone submits is too dark or blurry, the model may not be able to match it correctly. We acknowledge that poor image quality can be unfairly concentrated among demographic groups.

That is why Microsoft is offering customers a new Recognition Quality API that flags problems with lighting, blur, occlusions, or head angle in images submitted for facial verification. Microsoft also offers a reference app that provides real-time suggestions to help users capture higher-quality images that are more likely to yield accurate results.


To leverage the image quality attribute, users need to call the Face Detect API. See the Face QuickStart to test out the API.
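As a rough sketch of what such a call looks like, the snippet below builds a Face Detect request that asks for the quality attribute and filters the response to high-quality faces. The endpoint and key are placeholders, and the exact model and attribute names (detection_03, recognition_04, qualityForRecognition) are assumptions to verify against the Face QuickStart and current API reference.

```python
# Hedged sketch of a Face Detect request asking for the face quality
# attribute. Endpoint and key are placeholders; model and attribute
# names should be checked against the current Face API reference.
import json
import urllib.request

def build_detect_request(endpoint, key, image_url):
    """Build a POST request to Face Detect with the quality attribute.

    Assumes the quality attribute requires the newer detection_03 /
    recognition_04 models, per the service documentation.
    """
    query = ("detectionModel=detection_03"
             "&recognitionModel=recognition_04"
             "&returnFaceAttributes=qualityForRecognition")
    return urllib.request.Request(
        f"{endpoint}/face/v1.0/detect?{query}",
        data=json.dumps({"url": image_url}).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/json",
        },
        method="POST",
    )

def high_quality_faces(detect_response):
    """Keep only faces the service rates 'high' quality, the level
    suggested for verification scenarios."""
    return [f for f in detect_response
            if f.get("faceAttributes", {})
                .get("qualityForRecognition", "").lower() == "high"]

# Usage (requires a real endpoint and key):
# req = build_detect_request(
#     "https://<resource>.cognitiveservices.azure.com",
#     "<key>", "https://example.com/photo.jpg")
# with urllib.request.urlopen(req) as resp:
#     faces = high_quality_faces(json.load(resp))
```

Filtering on the quality rating before calling verification is one way to route low-quality captures back to the user for a retake rather than risking a false non-match.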

Looking to the future

We are excited about the future of Azure AI and what responsibly developed technologies can do for the world. We thank our customers and partners for adopting responsible AI practices and being on the journey with us as we adapt our approach to new responsible AI standards and practices. As we launch the new Limited Access policy for our facial recognition service, in addition to new computer vision features, your feedback will further advance our understanding, practices, and technology for responsible AI.

Learn more at the Limited Access FAQ.