Deep(ly) Learning About Deep Learning

Ishmam Khan ’25

Figure 1: By using a neural network, tumor-immune responses can be gauged and visualized to detect certain antigens in cancers.

Machine learning is the ability of artificial intelligence to build a model from previously collected data and use it to identify patterns in a way that simulates human behavior. Many fields branch off from machine learning, such as bioinformatics, the intersection of technology and biology. Recently, researchers at Stony Brook University applied deep learning, a subset of machine learning in which neural networks attempt to simulate the behavior of the human brain. They aimed to compare existing deep-learning methods that analyze microscope images of the interactions between immune cells and pancreatic ductal adenocarcinoma (PDAC), a cancer of the pancreas.

First, PDAC tissues were tagged for cancer markers using immunohistochemistry and immunofluorescence. Microscopic images of these tagged tissues were captured with brightfield microscopy and then analyzed with one of three cell-identification methods: ColorAE, which performs color decomposition on the stained cells; U-Net, which segments cells in order to track unique features of the immune cells while observing their behavior; and ColorAE:U-Net, which combines the two approaches. The researchers found that ColorAE and U-Net each had strengths and weaknesses. ColorAE generally performed better than U-Net at correctly detecting and classifying multicolored immune cells, since it was able to detect lighter-colored cells; U-Net, however, outperformed ColorAE in cases requiring finer segmentation. The inherent difficulty of color decomposition, caused by subtle differences in staining patterns and overlapping color spectra, posed problems for both methods: an intense yellow and a light black can both appear brown, yet the two methods might classify that color differently. Finally, the researchers found that the combined ColorAE:U-Net analysis outperformed either method alone. They concluded that no single universal method is best for targeting these markers, and that using multiple methods improves the overall reliability of deep-learning analysis.
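The idea of combining two imperfect segmentation methods can be illustrated with a small sketch. This is not the authors' implementation; it is a toy example, using made-up 4×4 masks, that shows how taking the union of two methods' predictions can recover cells that either method alone misses, as measured by the Dice similarity coefficient commonly used to score segmentations.

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks (1.0 = perfect)."""
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0

# Toy ground-truth mask for one stained cell region (hypothetical data).
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]], dtype=bool)

# Two imperfect "methods": each misses part of the cell.
mask_a = np.array([[0, 1, 0, 0],
                   [0, 1, 1, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]], dtype=bool)   # misses one true pixel

mask_b = np.array([[0, 0, 1, 0],
                   [0, 1, 1, 0],
                   [0, 0, 1, 0],
                   [0, 0, 0, 0]], dtype=bool)   # misses one, adds a false positive

# Simple ensemble: a pixel counts as "cell" if either method flags it.
combined = np.logical_or(mask_a, mask_b)

print(f"Method A alone: {dice(mask_a, truth):.3f}")    # ~0.857
print(f"Method B alone: {dice(mask_b, truth):.3f}")    # ~0.750
print(f"Combined:       {dice(combined, truth):.3f}")  # ~0.889
```

Here the union scores higher than either method alone because the methods' errors do not overlap, mirroring the study's finding that the combined ColorAE:U-Net analysis was more reliable than its components.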

Overall, the researchers found this digital annotation method much faster than the manual alternative, dramatically reducing analysis time, labor, and cost. The report emphasizes that these methods still need refinement. With further research into deep learning and its applications, however, the approach has the potential to improve clinical care in both cost and efficiency. The field of bioinformatics thus illustrates the value of crossing disciplinary boundaries to improve healthcare in the modern era.

Works Cited:

[1] D.J. Fassler, et al., Deep learning-based image analysis methods for brightfield-acquired multiplex immunohistochemistry images. Diagnostic Pathology, 1, 1-11 (2020). doi:10.1186/s13000-020-01003-0

[2] Image retrieved from: https://pixabay.com/vectors/a-i-ai-anatomy-2729782/
