
Artificial intelligence (AI): The new era in diagnostic imaging.

QNR Team

Updated: Aug 26, 2023


In this article, we will review some of the most important milestones in the discovery and development of artificial intelligence, from the invention of the vacuum tube in 1906 to the present day. We will also explain some of the terms most commonly used in AI, as well as the characteristics, advantages and benefits of applying artificial intelligence (AI), which is ushering in a new era in diagnostic imaging.




The development of artificial intelligence (AI) is a fascinating and complex field that has experienced significant advances throughout history. From its beginnings in the 20th century to the present day, AI has been shaped by various historical and scientific events that have laid the groundwork for its discovery and development.


A brief history of AI.


The vacuum tube and the birth of electronics (1906-1943):

As mentioned in the first article on AI, the vacuum tube or "audion," invented by Lee De Forest in 1906, was one of the first technological breakthroughs that paved the way for the development of AI. These devices allowed electrical signals to be amplified and switched, which was fundamental to the creation of electronic circuits. Over the following decades, scientists and technologists, such as Alan Turing, Claude Shannon and John von Neumann, laid the theoretical foundations of modern computing and established the mathematical foundations of artificial intelligence.


The Turing test and the birth of AI (1950):

In 1950, British mathematician and scientist Alan Turing proposed a test to determine a machine's ability to exhibit intelligent behavior. This test, known as the "Turing test," established a milestone in the development of AI by raising the possibility that machines could think. Although the Turing test is still the subject of debate, it laid the groundwork for future research and development in the field of AI.


The birth of cybernetics (1943-1956):

During World War II, Norbert Wiener and other scientists worked on the development of automatic control systems, leading to the birth of cybernetics. Cybernetics focused on the study of control and feedback systems in living organisms and machines. This discipline provided fundamental concepts for the development of AI by exploring how systems could simulate human behavior and learn from their environment.



The rise of the perceptron and the winter of AI (1956-1974):

In 1958, psychologist Frank Rosenblatt developed the perceptron, a machine learning algorithm that allowed machines to learn and recognize simple patterns (Bernard Widrow and Ted Hoff soon followed with the closely related ADALINE). This milestone generated great optimism in the scientific community and opened the door to new possibilities in the field of AI. However, in the 1970s, the lack of significant breakthroughs led to a period known as the "AI winter," in which interest and funding declined due to the lack of practical results.


The renaissance of AI and deep learning (1980-2010):

In the late 1980s and early 1990s, there was a resurgence in the field of AI thanks to the development of new techniques and algorithms. In particular, the deep learning approach, inspired by the structure and functioning of the human brain, enabled machines to learn and extract complex features from massive data sets. This breakthrough paved the way for practical applications of AI, such as voice recognition, computer vision and virtual assistants, which were among the first large-scale consumer uses of the technology. Most of us have probably used it on our phones, computers and TVs without knowing that it was artificial intelligence (AI).


Big Data and the AI revolution (2010-2016):

During this period, the exponential growth of available data, storage and processing capacity drove a revolution in AI. The use of machine learning algorithms and data mining techniques enabled machines to extract valuable insights from large volumes of information, driving significant advances in areas such as image recognition, natural language processing and autonomous driving.


Recent advances and the future of AI (2016-present):

In recent years, advances in the field of AI have been remarkable, and they are becoming increasingly visible to the public and increasingly far-reaching. The combination of deep learning algorithms, increased computer processing power and the availability of large data sets has led to significant improvements in areas such as machine translation, chatbots and AI-assisted medical diagnostics, among hundreds of applications, as we have seen in our previous article on this topic. In addition, advances have been made in the field of ethical AI, where challenges related to privacy, fairness and transparency in the use of this technology are being addressed.


AI: Some Definitions


To become more familiar with the terminology of artificial intelligence, we will explore some of the key terms most commonly used in AI, which will be part of our environment in the near future, if they are not already.

What is an algorithm in AI?

An algorithm is a sequence of logical steps or rules designed to solve a specific problem. Algorithms in AI can range from simple to highly complex and can address tasks such as classification, pattern recognition, decision making, among others. These algorithms are the basis for the operation of AI systems and are used to process data and make automated decisions.
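To make the idea concrete, here is a minimal, purely illustrative sketch in Python (not taken from any clinical system): a fixed sequence of logical steps that maps two hypothetical image measurements to a decision. The measurement names and thresholds are assumptions invented for the example.

```python
def classify_lesion(diameter_mm, has_irregular_border):
    """Toy rule-based algorithm: a fixed sequence of logical steps that maps
    two hypothetical measurements to a label. Thresholds are illustrative only."""
    if diameter_mm < 2.0:
        return "probably benign"          # step 1: very small lesions
    if has_irregular_border:
        return "suspicious - review"      # step 2: irregular borders
    if diameter_mm > 10.0:
        return "suspicious - review"      # step 3: large lesions
    return "likely benign - follow up"    # step 4: default decision

print(classify_lesion(1.5, False))   # -> probably benign
print(classify_lesion(6.0, True))    # -> suspicious - review
```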


What is machine learning?

Machine learning is a branch of AI that focuses on developing algorithms that allow machines to learn from data and improve their performance over time. Machine learning algorithms fall into two main categories: supervised learning, where a labeled training dataset is provided, and unsupervised learning, where no labels are provided. Through machine learning, machines can recognize patterns and make predictions based on the available data.
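As a hedged illustration of the two categories, the sketch below uses scikit-learn (assumed to be installed) and synthetic data: a supervised classifier that learns from labeled examples, and an unsupervised clustering algorithm that receives no labels at all.

```python
# Supervised vs. unsupervised learning on synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                 # 200 samples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # labels used only in the supervised case

# Supervised learning: the algorithm is given labeled examples to learn from.
clf = LogisticRegression().fit(X, y)
print("predicted label:", clf.predict([[1.0, 0.5]])[0])

# Unsupervised learning: no labels; the algorithm looks for structure on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster assignment:", km.predict([[1.0, 0.5]])[0])
```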


What are neural networks?

Neural networks are a computational model inspired by the functioning of the human brain. These networks are composed of layers of interconnected nodes called artificial neurons. Each neuron takes inputs, performs a computation and produces an output. Through the process of training, neural networks adjust the weights of the connections between neurons to improve their ability to recognize patterns and perform specific tasks, such as image recognition or natural language processing. Neural networks have been instrumental in the advancement of deep learning and have driven many of the recent achievements in AI.
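The following minimal sketch, using only NumPy, shows a single artificial neuron in action: inputs are weighted and summed, passed through an activation function, and the weights are repeatedly nudged to reduce the error, which is the training process described above in miniature. All values are illustrative.

```python
# One artificial neuron: weighted sum, activation, and a simple weight update.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.8, 0.2, 0.5])      # inputs to the neuron (illustrative values)
w = np.array([0.1, -0.3, 0.4])     # connection weights, initially arbitrary
b = 0.0                            # bias term
target = 1.0                       # desired output for this example
lr = 0.5                           # learning rate

for step in range(20):
    out = sigmoid(np.dot(w, x) + b)        # forward pass: weighted sum + activation
    error = out - target                   # distance from the desired output
    grad = error * out * (1.0 - out)       # gradient of the error w.r.t. the pre-activation
    w -= lr * grad * x                     # adjust weights toward lower error
    b -= lr * grad                         # adjust bias as well

print("output after training:", sigmoid(np.dot(w, x) + b))
```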


What is deep learning?

Deep learning is a branch of machine learning that focuses on the use of deep neural networks for data processing and analysis. Unlike traditional neural networks, deep networks contain multiple hidden layers, allowing them to learn hierarchical representations of data. Deep learning has achieved breakthroughs in areas such as speech recognition, natural language processing and image recognition, and has driven the development of applications such as virtual assistants and autonomous driving systems.
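As a rough sketch of what "multiple hidden layers" means in practice, the example below stacks several layers into one small network using PyTorch (an assumption; any deep learning framework would do) and trains it briefly on synthetic data.

```python
# A tiny "deep" network: the single-neuron idea above, repeated across stacked layers.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),   # hidden layer 1
    nn.Linear(32, 16), nn.ReLU(),   # hidden layer 2
    nn.Linear(16, 8),  nn.ReLU(),   # hidden layer 3
    nn.Linear(8, 2),                # output layer: two classes
)

x = torch.randn(64, 16)                    # 64 synthetic samples, 16 features each
y = torch.randint(0, 2, (64,))             # synthetic labels
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):                    # a short, illustrative training loop
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print("final training loss:", loss.item())
```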


What is Data Mining?

Data mining is the process of discovering interesting patterns and relationships in large data sets. In the context of artificial intelligence, data mining plays a crucial role by providing valuable information that can be used to train AI models. Using statistical techniques and machine learning algorithms, data mining enables the discovery of hidden information in data, which can lead to a better understanding of phenomena and more informed decision making.
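A minimal data-mining sketch, assuming pandas and NumPy are available: a synthetic table is scanned for relationships between variables with a simple correlation analysis, one of the most basic pattern-discovery techniques. The column names and the planted relationship are invented for the example.

```python
# Discovering a hidden relationship in a synthetic tabular data set.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "age": rng.integers(30, 80, n),
    "scan_time_s": rng.normal(300, 30, n),
})
# Plant a relationship so there is a pattern to "discover".
df["lesion_count"] = (df["age"] / 20 + rng.normal(0, 1, n)).round().clip(lower=0)

# Correlation analysis: which variables move together with lesion_count?
print(df.corr()["lesion_count"].sort_values(ascending=False))
```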


What is Natural Language Processing?

Natural language processing (NLP) is a branch of AI that deals with the interaction between computers and human language. NLP focuses on developing algorithms and models that enable machines to understand, interpret and generate human language in written or spoken form. This includes tasks such as speech recognition and generation, machine translation, sentiment analysis, and text information extraction. NLP has had broad applications in areas such as virtual assistance, customer service and large-scale text research.
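As a toy illustration of NLP, the sketch below builds a bag-of-words sentiment classifier with scikit-learn (assumed installed) on a handful of made-up sentences; a real system would require far more data and more sophisticated models.

```python
# A minimal bag-of-words sentiment classifier on invented example sentences.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "the report was clear and the staff were helpful",
    "excellent image quality and a fast exam",
    "the waiting time was terrible and the scan had to be repeated",
    "poor communication and a very uncomfortable experience",
]
labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["the exam was fast and the results were clear"]))
```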



What is computer vision?

Computer vision is a field of AI that focuses on enabling machines to understand and analyze images and videos. Through advanced algorithms and techniques, machines can extract visual features, recognize objects, detect faces, track moving objects, and more. Computer vision has found applications in a variety of areas, such as medicine, security, the entertainment industry and industrial automation. With advances in AI, computer vision has achieved increasingly accurate results and has surpassed human capabilities in some specific tasks.
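The sketch below, assuming NumPy and SciPy are installed, shows one of the most basic "visual features" mentioned above: edges extracted from a synthetic image with a Sobel filter. A real pipeline would of course start from an actual image file.

```python
# Edge extraction from a synthetic image using a Sobel filter.
import numpy as np
from scipy import ndimage

image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0                 # a bright square on a dark background

gx = ndimage.sobel(image, axis=0)         # horizontal intensity changes
gy = ndimage.sobel(image, axis=1)         # vertical intensity changes
edges = np.hypot(gx, gy)                  # combined edge strength

print("strongest edge response:", edges.max())
print("edge pixels found:", int((edges > 1.0).sum()))
```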


What is robotics?

Robotics is an interdisciplinary field that combines artificial intelligence, mechanical engineering and electronics to design, build and operate robots. Robots are autonomous or semi-autonomous machines that can perform physical and cognitive tasks. With the integration of AI, robots can learn and adapt to their environment, make decisions based on sensory data, and perform more complex actions. Robotics and AI have been used in a wide range of applications, from industrial automation to medical care and space exploration.



What is intelligent automation?

Intelligent automation refers to the application of artificial intelligence and robotics to perform tasks and processes autonomously and efficiently. It combines machine learning algorithms, natural language processing, computer vision and robotic control to enable machines to perform complex tasks without human intervention. Intelligent automation has been used in a variety of industries, from manufacturing and logistics to financial services and healthcare, improving efficiency, accuracy and productivity.


Artificial Intelligence (AI) applied in medicine and diagnostic imaging.


In previous articles we have already covered some of the most important historical milestones on the road to the discovery of MRI, X-ray and CT imaging, as well as their technical and scientific foundations. Now we will focus specifically on the application of artificial intelligence (AI), which has demonstrated numerous benefits in improving and optimizing images across the different diagnostic imaging methods.



Some of the most important benefits of applying AI in diagnosis:

Increased accuracy in diagnosis and more appropriate treatment: AI algorithms can analyze large volumes of images and provide an accurate assessment of structures, pathologies and abnormalities, resulting in earlier diagnoses and more appropriate, more successful treatments.


Improved efficiency: AI systems can automate repetitive and tedious tasks, helping medical professionals by shortening image analysis time. This makes them especially useful in emergencies or when medical personnel are in short supply, for example in disaster situations.


Early disease detection and classification: AI algorithms have proven effective in the early detection and classification of pathologies in images, identifying subtle patterns that medical professionals might miss. Examples include the early detection of brain tumors, cardiac abnormalities and neurodegenerative diseases, including conditions that may require surgical intervention.

Generation of high-resolution images: Researchers have used AI techniques such as super-resolution to generate detailed, sharp images that greatly improve detection and diagnostic capabilities, revealing anomalies so small or subtle that the human eye cannot detect them.



Accelerated image reconstruction: AI has been used to accelerate the acquisition and reconstruction process of magnetic resonance images. This has significantly reduced scanning time and improved patient experience.


Augmented reality and real-time assistance: AI has been combined with augmented reality to provide physicians with real-time visualization during MRI procedures. This helps guide the intervention and improves the accuracy and safety of the procedure.


Automatic pathology detection: AI algorithms can be trained to automatically detect specific pathologies in CT images, such as tumors or aneurysms. This can be useful in the early detection and timely treatment of acute and serious diseases that require immediate intervention to save the patient's life.


Radiation dose reduction: AI has been used to develop radiation dose reduction techniques in computed tomography. By optimizing image acquisition parameters, adequate image quality can be obtained with a lower radiation dose, reducing the risk to patients.


Characteristics and differences of AI applied to diagnostic imaging.

Machine Learning: Machine learning is a key technique in the application of AI to MRI, CT and other diagnostic imaging methods. Machine learning algorithms can analyze large image data sets and learn complex patterns that indicate the presence of disease or abnormality, helping clinicians interpret images more accurately.


Convolutional Neural Networks (CNNs): CNNs are a commonly used architecture in AI applied to medical imaging. They are networks specifically designed for image processing and can extract important features from images, such as edges, textures, shapes, tumors, lesions, or anatomical structures. This facilitates analysis and detection of anomalies by professionals.
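A minimal CNN sketch in PyTorch (an assumption, not any particular clinical implementation): convolutional layers extract local features such as edges and textures, pooling layers downsample them, and a final linear layer maps the result to two illustrative classes. The random tensor stands in for a batch of grayscale images.

```python
# A tiny convolutional network for 64x64 grayscale inputs (synthetic data).
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),   # low-level features (edges, textures)
    nn.MaxPool2d(2),                                         # downsample 64 -> 32
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),  # higher-level features
    nn.MaxPool2d(2),                                         # downsample 32 -> 16
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),                              # two illustrative classes
)

batch = torch.randn(4, 1, 64, 64)          # 4 synthetic 64x64 grayscale "images"
logits = cnn(batch)
print(logits.shape)                        # -> torch.Size([4, 2])
```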


Image segmentation and analysis: AI is used in image segmentation and analysis to identify and label different structures and tissues in the human body. This helps clinicians assess and quantify the size, shape and position of anatomical structures, as well as identify regions of interest and anomalies.
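As a hedged, simplified illustration of segmentation, the sketch below thresholds a synthetic image, labels the connected regions with SciPy, and measures their sizes; clinical segmentation relies on trained models rather than a fixed intensity threshold.

```python
# Threshold-based segmentation of a synthetic image, then region labeling and sizing.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
image = rng.normal(0.2, 0.05, (128, 128))   # background noise
image[30:50, 40:70] += 0.6                  # a bright "structure"
image[90:100, 20:30] += 0.6                 # a second, smaller one

mask = image > 0.5                          # simple intensity threshold
labels, n_regions = ndimage.label(mask)     # label connected regions
sizes = ndimage.sum(mask, labels, range(1, n_regions + 1))

print("regions found:", n_regions)
print("region sizes (pixels):", sizes)
```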



Artificial Intelligence (AI) Applied to Mammography.

Although the characteristics of AI applied to diagnostic imaging are broadly similar across modalities, we will review some features in a little more detail with respect to mammography, AI and breast cancer detection.


Detection and classification of abnormalities, improved accuracy of interpretation:

AI has been shown to be highly effective in detecting and classifying abnormalities in mammography images. Machine learning algorithms have been trained on thousands of images to identify, far more quickly and accurately, features and patterns that could indicate the presence of benign or malignant lesions or tumors. These algorithms can assist radiologists in the early detection of breast cancer and in reducing false positives and false negatives. For example, some studies have shown that AI-based systems can match or even outperform human radiologists in terms of sensitivity and specificity, providing a reliable second opinion.
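To make the sensitivity and specificity figures concrete, the sketch below (with entirely simulated labels and model scores, not real study data) shows how those two metrics are computed from a confusion matrix using scikit-learn.

```python
# Computing sensitivity and specificity from simulated labels and model scores.
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, 500)                                   # 0 = benign, 1 = malignant (synthetic)
scores = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, 500), 0, 1)   # stand-in for a model's output scores
y_pred = (scores > 0.5).astype(int)                                # decision threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)    # fraction of true cancers the model catches
specificity = tn / (tn + fp)    # fraction of healthy cases it correctly clears
print(f"sensitivity: {sensitivity:.2f}, specificity: {specificity:.2f}")
```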


Personalization of medical care.

AI has also enabled greater personalization in the medical care of patients undergoing mammography. AI algorithms can analyze clinical data, such as family history and individual characteristics, to provide more accurate risk assessments. This helps physicians determine the need for follow-up examinations or additional testing, which in turn can improve early detection and treatment of breast cancer.


Conclusion


The history of artificial intelligence has been marked by important milestones, from early developments in electronics to recent advances in deep learning and Big Data. As we continue to explore the possibilities of AI, it is essential to address the ethical and societal challenges involved in its implementation. AI has demonstrated its potential to transform virtually every sector and aspect of our lives, and its evolution remains an exciting area of research that is developing at an astonishing pace.

The application of artificial intelligence (AI) in diagnostic imaging has proven to be a significant breakthrough in the field of medicine. By detecting and classifying abnormalities, improving accuracy in interpretation, reducing analysis time, and personalizing medical care, AI has improved the ability of physicians to detect various pathologies at earlier stages and provide more effective treatments. As research and development in this field continue, artificial intelligence is expected to play an even greater role in improving medical care and patient outcomes. This is, indeed, a new era in diagnostic imaging.



PUB0011EN



About the Author:

Maria Soledad Gomez has 10+ years of industry experience working in a variety of roles within regulated industry, healthcare and medicine, including food/beverage, hospitals and veterinary medicine. Maria Sole writes technical articles on a wide variety of topics in the medical field.


About the Editor:

Brian Hoy has 20+ years of industry experience in medical devices and business formation, covering the complete life cycle with global scope. Brian consults for industry and gives general advisory and off-hours support.



 






 










