Radiology requires countless hours searching for tiny lesions, creating distance and contour annotations, and filling out checklists to determine disease stage. These tasks are onerous and error-prone, resulting in high costs and frequent misdiagnoses. Thankfully, deep learning is now improving this process for radiologists around the world.
Using the latest deep learning technology in an intelligent cloud platform, Arterys, a startup focused on streamlining medical image interpretation and post-processing, is working to address these deficiencies. With an AI-based contouring algorithm, the company has reduced the time required to calculate ventricular volumes from 30-60 minutes to just a handful of seconds.
Dan Golden, Director of Machine Learning at Arterys, will join us at the Deep Learning in Healthcare Summit to share the technology behind their software and how they proved its safety and efficacy, making it the first technology leveraging cloud computing and deep learning in a clinical setting ever to be cleared by the FDA.
I asked Dan a few questions ahead of the summit to learn more.
Can you tell us a bit more about your work?
I’m the Director of Machine Learning at Arterys, where we have an incredible team of deep learning researchers creating the next generation of clinical radiological decision support systems. We’re a small team, so my responsibilities are quite broad; beyond coordinating team projects, I work on everything from data sourcing and cleaning, to model training, to scientific study design for regulatory clearance. Our team has done some really amazing work; we were really excited to get FDA clearance in January 2017 for our first deep learning-based product. That product, which is a web-based, zero-footprint, cardiac MRI post-processing suite, is the first technology combining cloud and deep learning to be cleared by the FDA.
How did you begin your work in the deep learning field?
I started working on medical machine learning while a postdoc in the Stanford radiology department in 2012. I worked on a few different projects, using MRI and CT images to predict outcomes for cancer patients. At the time, we were still making models using hand-engineered features, with the lofty goal of calculating p-values below 0.05. Back then, the prospect of creating a real clinical application seemed far away. Once I moved into industry in 2013, the need to create a real commercial product inspired me to focus on the latest and greatest technology which, at the time, was the nascent field of deep learning. I’ve never looked back; with hand-crafted features you can certainly publish a paper, but with deep learning you can go so much further and really create a transformative product.
What are the key factors that have enabled recent advancements in medical imaging?
At Arterys, we’re confident that the future of medical imaging will be in the cloud. Cutting-edge deep learning research is certainly crucial, but equally important is how recent advances in cloud infrastructure allow our products to scale along with the deep learning infrastructure that underlies them. GPU-enabled cloud instances and the proliferation of worldwide availability regions have been critical to the international success of our product. Our application is 100% cloud-based, and the distributed architecture that allows us to process multi-gigabyte studies with real-time distributed rendering and deep learning inference in dozens of countries simultaneously would not have been possible even a few years ago.
Which areas of healthcare do you think deep learning will benefit the most and why?
Advances in deep learning in the last few years have transformed the fields of both computer vision and natural language processing. It’s no surprise that medicine can benefit from these advances given the copious amounts of images and free text in patient medical records. Previous attempts to automate the prediction of patient outcomes were crippled by the inability to efficiently process unstructured medical data, such as clinician-dictated free-text reports and radiological and histopathological images. With deep learning, these extremely important data sources can now be incorporated, which will allow automated systems to be accurate enough to really influence patient care.
What deep learning advancements in healthcare would you hope to see in the next 3 years?
A patient’s electronic medical record is an incredibly complicated morass of disparate data types; it can include clinicians’ free-text notes, confirmed or suspected symptoms and diagnoses, billing codes, radiological images and reports, lifestyle information, laboratory test results, and so much more. The human brain is well equipped to make sense of this multimodal information for individual patients, but any individual deep learning model is not. Recent work on deep learning-based image captioning and text-based image retrieval gives me hope that we’ll soon be able to combine all these sources of data into one beautiful and efficient model; the most accurate predictive model will surely be the one that can understand the electronic medical record in its entirety.
What do you see in the future for Arterys?
Although we’ve worked hard to automate some of the most complicated parts of the cardiac post-processing workflow, we’re not done yet. We expect to continue adding automated features to our cardiac product, while also expanding our product offerings to include efficient workflows and automation for other diseases. We’ve only scratched the surface of what deep learning can do in this space, and we’re excited to keep making radiologists’ jobs easier and more effective!
Can't make it to Boston? Join us at the Machine Intelligence in Healthcare Summit in Hong Kong on 9-10 November. View all upcoming events here.
Dan Golden will be speaking at the Deep Learning in Healthcare Summit in Boston on 25-26 May, taking place alongside the annual Deep Learning Summit. Confirmed speakers include Junshui Ma, Senior Principal Scientist, Merck; Nick Furlotte, Senior Scientist, 23andMe; Muyinatu Bell, Assistant Professor, Johns Hopkins University; Saman Parvaneh, Senior Research Scientist, Philips Research; David Plans, CEO, BioBeats; and Fabian Schmich, Data Scientist, Roche. View more details here.
Tickets are now limited for this event. Book your place now.
Opinions expressed in this interview may not represent the views of RE•WORK. Some opinions may even go against the views of RE•WORK, but are posted in order to encourage debate and well-rounded knowledge sharing, and to allow alternate views to be presented to our community.