Physicians sometimes make mistakes when analyzing medical images. Systemic diseases such as lupus and diabetes require multiple types of medical images to support a diagnosis, leaving a wider margin for human error in detection.
While image analysis software often tackles object recognition, detection and segmentation separately, BiomedParse unifies these three crucial steps, offering clinicians a more cohesive and intelligent way of analyzing medical images.
Sheng Wang, an assistant professor in the University of Washington’s Paul G. Allen School of Computer Science & Engineering, collaborated with researchers from Microsoft Research and Providence Genetics and Genomics to develop BiomedParse. Their findings were recently published in Nature Methods.
Wang explained in a recent interview with UW News that the model is “like a search engine for medical images.” He continued that it “can enable doctors to understand images much better,” adding that the technology “is a way to augment their skills, not replace them.”