
Machine Learning at Work

MAY 01, 2022
Member Contributor

John Burkey, Founder, CEO, and CTO of Brighten Ai

BS in Physics, History, and Computer Science

A few years ago, I was looking for something new to do and I called my buddies at Apple. They told me that Siri was super interesting, but they were kind of stuck. I joined the Siri team and have been in voice recognition since. Along the way I realized that people had bolted together a bunch of components but didn’t really understand how any of them worked. So I founded Brighten Ai, which takes a physicist’s approach to voice recognition. We’ve broken everything down to first principles, looked at all the components, and reassembled them with the end goal in mind.
A big part of our work is in architecting things. When we listen to a person talk, we get a lattice of possibilities for every 10-millisecond frame, where any one of several phonemes could exist. It’s like we have a ball of probabilities of sounds and words. We compile this for all the frames and explore what the person is trying to say with grammar, vocabulary, and knowledge structures that all work together. This generates more probabilities, and the system picks the most likely one based on context―that’s another n-dimensional space. We call it dialogue.
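To make the lattice-and-search idea concrete, here is a minimal, hypothetical Python sketch (not Brighten Ai’s actual system): each 10-millisecond frame scores a few candidate phonemes, and a Viterbi-style search over the lattice picks the most probable sequence once a toy transition model, standing in for the grammar, vocabulary, and context Burkey describes, is folded in. The phoneme set, scores, and transition probabilities are invented for illustration.

```python
# Minimal sketch of picking the best path through a phoneme lattice.
# All numbers here are made up for illustration.
import math

PHONEMES = ["h", "eh", "l", "ow"]

# One row per 10 ms frame: acoustic probability of each phoneme (hypothetical).
frames = [
    {"h": 0.7, "eh": 0.1, "l": 0.1, "ow": 0.1},
    {"h": 0.2, "eh": 0.6, "l": 0.1, "ow": 0.1},
    {"h": 0.1, "eh": 0.2, "l": 0.6, "ow": 0.1},
    {"h": 0.1, "eh": 0.1, "l": 0.2, "ow": 0.6},
]

def transition(prev, cur):
    """Toy context model: probability of moving from one phoneme to the next."""
    return 0.5 if prev != cur else 0.3  # mildly favors changing phonemes

def best_path(frames):
    """Standard Viterbi recursion in log space over the lattice."""
    scores = {p: math.log(frames[0][p]) for p in PHONEMES}
    paths = {p: [p] for p in PHONEMES}
    for frame in frames[1:]:
        new_scores, new_paths = {}, {}
        for cur in PHONEMES:
            prev = max(PHONEMES, key=lambda q: scores[q] + math.log(transition(q, cur)))
            new_scores[cur] = scores[prev] + math.log(transition(prev, cur)) + math.log(frame[cur])
            new_paths[cur] = paths[prev] + [cur]
        scores, paths = new_scores, new_paths
    best = max(PHONEMES, key=lambda p: scores[p])
    return paths[best]

print(best_path(frames))  # ['h', 'eh', 'l', 'ow']
```

In a production recognizer, the simple transition function would be replaced by full language, vocabulary, and knowledge models, and the lattice would carry far more hypotheses per frame.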
Our goal is to provide companies with a platform on which they can build a product. In this new phase of the high-tech industry, there’s more room for suppliers and smaller shops to participate in the bigger ecosystem. It’s a place where scientists can have a high degree of impact.


Sean Grullon, Lead AI Scientist at Proscia

BS, MS, and PhD in Physics

I am the lead AI research scientist for Proscia, a medical imaging startup company. We research machine learning algorithms for computer vision in order to analyze medical images. If you get a tissue biopsy that needs to be screened for cancer, a pathologist examines it under an optical microscope to look for evidence of a cancerous tumor. This is time-intensive and difficult. We’re developing tools to help pathologists diagnose and triage cancer faster. We’re focused on melanoma, the deadliest form of skin cancer.
On a day-to-day basis, I work with algorithms that we’re researching in-house or that have worked well in other domains to see how well they work for medical imaging. We don’t have a product on the market yet, but we published some good results from our AI algorithm last October.
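The "borrow what has worked in other domains" approach Grullon describes often takes the form of transfer learning. Below is a hedged sketch of that generic pattern, not Proscia’s actual pipeline: an ImageNet-pretrained backbone is fine-tuned to classify tissue-image patches, with the class count, data shapes, and training loop all stand-ins.

```python
# Generic transfer-learning sketch for tissue-patch classification.
# The dataset, class count, and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # hypothetical: melanoma vs. benign

# Reuse an ImageNet-pretrained backbone (downloads weights) and swap in a new head.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def training_step(images, labels):
    """One gradient step on a batch of tissue-patch tensors."""
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with random stand-in data shaped like 224x224 RGB patches.
dummy_images = torch.randn(4, 3, 224, 224)
dummy_labels = torch.randint(0, NUM_CLASSES, (4,))
print(training_step(dummy_images, dummy_labels))
```

In practice, whole-slide images are far too large to feed to a network directly, so they are typically tiled into small patches like the ones assumed here before any such model sees them.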
I worked with machine learning a couple of times during my physics PhD research, but it was just one of many different techniques I dabbled with to get results. Machine learning started to take off when I graduated, and I’ve been applying it in the healthcare or pharmaceutical space ever since. I’ve found my path rewarding, very interesting, and very impactful. It’s gratifying to see the clinical impact of your work.


Helen Jackson, Machine Learning Researcher

BS, MS, and PhD in Physics

I knew at the age of 12 that I wanted to be a physicist, but it was a convoluted pathway. After many curves I made it to physics graduate school at Fisk University and then at Vanderbilt. But Vanderbilt didn’t allow graduate students to work, and I needed an income to support my family. I eventually accepted an invitation to work at the Air Force Research Lab, studying radiation effects on electronics, and simultaneously completed my PhD at the Air Force Institute of Technology as a civilian. It was a long path, but I made it.

I became a visiting physics professor for a while and taught myself machine learning during my free time. Eventually, I was offered a data science contract job. The job was my first introduction to machine learning as a physicist―using computer vision to detect threats in the cluttered airport environment via X-ray scanning. I also worked on predicting failures in airport equipment. It was fascinating.
During my next position, I leveraged multidomain machine learning for an array of military applications, from differentiating the signatures of bombs from those of earthquakes, to biological applications such as biotechnology and bioterrorism. Next, I continued as a government contractor, combining computer vision with natural language processing for complex document understanding, among other projects. Recently, I was retained as a consultant to apply machine learning and data analytics to epidemiology data. Once you grasp machine learning and have a core knowledge of the basic sciences—biology, chemistry, physics—you’re prepared to work in many applications.


Michelle Kuchera, Assistant Professor of Physics at Davidson College

BS and MS in Physics, MS and PhD in Computational Science

I’m a professor and principal investigator of the Algorithms for Learning and Physics Applications (ALPhA) group at Davidson College. We collaborate with physicists at various facilities across the country and world to help them develop AI solutions for nuclear and particle physics tasks. This includes data processing, data analysis, and making theoretical predictions.
For example, we use machine learning methods to help detector physicists and experimental physicists select out interesting particle interactions for further study in their experiments. We also use machine learning to make fast predictions. In some cases, the calculations for a theoretical prediction would take an extremely long time, but with machine learning we can build a surrogate model that does them much faster.
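A surrogate model of this kind can be illustrated in a few lines. In the sketch below, a deliberately slow toy function stands in for an expensive theoretical calculation, and a small regression network learns to reproduce it so that new predictions become cheap; the function, sampling range, and model choice are illustrative assumptions rather than the ALPhA group’s actual codes.

```python
# Surrogate-model sketch: replace a slow calculation with a fast learned approximation.
import time
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_theory_calculation(x):
    """Stand-in for a slow theoretical prediction at input parameter x."""
    time.sleep(0.01)  # pretend this takes a long time
    return np.sin(3 * x) + 0.5 * x**2

# 1) Run the expensive calculation on a modest set of training inputs.
X_train = np.linspace(-2, 2, 200).reshape(-1, 1)
y_train = np.array([expensive_theory_calculation(x[0]) for x in X_train])

# 2) Fit a cheap surrogate that interpolates between those runs.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
surrogate.fit(X_train, y_train)

# 3) New predictions now come from the surrogate, far faster than the original code.
X_new = np.array([[0.37], [1.42]])
print(surrogate.predict(X_new))
print([expensive_theory_calculation(x[0]) for x in X_new])  # slow reference check
```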
Machine learning isn’t the solution to every challenge. If you have a solid understanding of the physics and the explicit mathematical rules to accomplish a task that you’re interested in, then that is the preferred method, unless there’s some challenge with implementation. However, machine learning has the potential to advance scientific discovery in areas where there are computational challenges.


Chris Rowen, VP of Engineering for Webex Collaboration AI at Cisco

BA in Physics, PhD in Electrical Engineering

My team works on the machine learning and AI building blocks that become key components of how our video conferencing platform, Webex, works. For example, we extract different elements of an audio stream―voices, noise, reverberations, commands, keywords―and rearrange them to do things like eliminate noise, reduce reverberations, and identify keywords to act on. We do this in real time with a latency of roughly 20 milliseconds, which is virtually undetectable.
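The frame-by-frame, low-latency pattern Rowen describes can be sketched as follows, with a crude spectral gate standing in for the learned noise-suppression models a product like Webex would actually use. The 16 kHz sample rate, 20 ms frame size, and fixed noise-floor estimate are assumptions made for the example.

```python
# Streaming audio cleanup sketch: process one short frame at a time.
import numpy as np

SAMPLE_RATE = 16_000
FRAME_MS = 20
FRAME_LEN = SAMPLE_RATE * FRAME_MS // 1000  # 320 samples per 20 ms frame

def suppress_noise(frame, noise_floor):
    """Zero out frequency bins that are not well above the estimated noise floor."""
    spectrum = np.fft.rfft(frame)
    mask = np.abs(spectrum) > 3.0 * noise_floor  # crude gate; real systems learn this
    return np.fft.irfft(spectrum * mask, n=FRAME_LEN)

def stream(frames):
    """Process frames one at a time, as a real-time pipeline would."""
    noise_floor = 1.0  # assumed estimate; normally tracked adaptively
    for frame in frames:
        yield suppress_noise(frame, noise_floor)

# Demo: one second of a noisy sine tone chopped into 20 ms frames.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
signal = np.sin(2 * np.pi * 440 * t) + 0.05 * np.random.randn(t.size)
frames = signal.reshape(-1, FRAME_LEN)
cleaned = np.concatenate(list(stream(frames)))
print(cleaned.shape)  # same length as the input signal
```

The key design constraint is that each frame must be fully processed before the next one arrives, which is what keeps the end-to-end latency near the frame length.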
Similarly, we decompose complex video streams into important elements. Where are your hands? Where’s your face? What gestures are you making? We can do a three-dimensional model extraction of your face and enhance it. We can change the lighting and give you a haircut. We are increasingly able to do things like track a person’s gaze to see what they’re paying attention to.
We’re also doing a lot in natural language processing, such as taking transcripts of meetings and extracting important events, comments, or commitments. And then we’re doing deep analytics, looking at interactions within an organization. Who spends time talking to whom? Who is this person falling out of touch with? What do these things mean about professional relationships? From these deep analytics and machine learning, we improve the clarity and intuitiveness of the system’s response.
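As a toy illustration of the who-talks-to-whom analytics described above, the sketch below counts pairwise interactions from invented meeting rosters and flags pairs whose contact has dropped. It is a minimal aggregation, not Cisco’s actual analytics.

```python
# Count pairwise meeting co-attendance and flag pairs falling out of touch.
from collections import Counter
from itertools import combinations

def pair_counts(meetings):
    """Count how often each pair of people appears in the same meeting."""
    counts = Counter()
    for attendees in meetings:
        for pair in combinations(sorted(attendees), 2):
            counts[pair] += 1
    return counts

# Hypothetical meeting rosters for two quarters.
last_quarter = [{"ana", "bo", "cy"}, {"ana", "bo"}, {"bo", "cy"}, {"ana", "cy"}]
this_quarter = [{"ana", "cy"}, {"bo", "cy"}, {"ana", "cy"}, {"bo", "cy"}]

before, after = pair_counts(last_quarter), pair_counts(this_quarter)

# Flag pairs who interacted before but less often now.
fading = [pair for pair in before if after.get(pair, 0) < before[pair]]
print(fading)  # [('ana', 'bo')]
```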

This Content Appeared In
The SPS Observer, Spring 2022