  • Review Article
  • Open access
  • Published: 18 January 2023

Explainable artificial intelligence for mental health through transparency and interpretability for understandability

  • Dan W. Joyce 1 , 2 ,
  • Andrey Kormilitzin 1 ,
  • Katharine A. Smith   ORCID: orcid.org/0000-0003-2679-1472 1 , 3 , 4 &
  • Andrea Cipriani   ORCID: orcid.org/0000-0001-5179-8321 1 , 3 , 4  

npj Digital Medicine volume 6, Article number: 6 (2023)


  • Computational biology and bioinformatics
  • Health care

The literature on artificial intelligence (AI) or machine learning (ML) in mental health and psychiatry lacks consensus on what “explainability” means. In the more general XAI (eXplainable AI) literature, there has been some convergence on explainability meaning model-agnostic techniques that augment a complex model (with internal mechanics intractable for human understanding) with a simpler model argued to deliver results that humans can comprehend. Given the differing usage and intended meaning of the term “explainability” in AI and ML, we propose instead to approximate model/algorithm explainability by understandability, defined as a function of transparency and interpretability. These concepts are easier to articulate, to “ground” in our understanding of how algorithms and models operate, and are used more consistently in the literature. We describe the TIFU (Transparency and Interpretability For Understandability) framework and examine how this applies to the landscape of AI/ML in mental health research. We argue that the need for understandability is heightened in psychiatry because data describing the syndromes, outcomes, disorders and signs/symptoms possess probabilistic relationships to each other—as do the tentative aetiologies and the multifactorial social and psychological determinants of disorders. If we develop and deploy AI/ML models, ensuring human understandability of their inputs, processes and outputs is essential to building trustworthy systems fit for deployment.

Introduction

In this review article, we examine explainable AI ("XAI”) in the specific context of psychiatric/mental health applications. Across healthcare, there is emerging skepticism about the ambitions of general XAI, with recommendations 1, 2 to avoid so-called “black box” models altogether. An AI is opaque or “black-box” when the computational mechanisms intervening between an input and the AI’s output are too complex to afford a prima facie description of why the model delivered that output—the exemplar case being deep neural networks, where computational complexity affords remarkable flexibility, usually at the cost of increasing opacity. Historically, inductive data-driven methods were considered difficult for humans to understand, and this was recognised in early applications of AI in medicine 3, where research favoured the explicit capture of clinical heuristics using symbolic propositions and inference mechanisms imported from formal logic. Similarly, when developing MYCIN 4, the authors preferred decision trees because “in order to be accepted by physicians [the system] should be able to explain how and why a particular conclusion has been derived”. In mental health, the need for explainability was articulated in early AI-based diagnostic applications; for example, in developing DIAGNO-II 5, statistical methods (linear discriminant functions and Bayesian classification) were compared to decision trees. In the absence of any clear performance advantages between the three methods, the authors concluded that decision trees were preferred because the data, the system’s structure and the computations performed all stood in close correspondence with clinicians’ domain knowledge, alongside an assumption that clinicians use a similar sequential rule-in/rule-out style of reasoning when making diagnoses.

These examples center the structure and functioning of algorithms and suggest both should stand in close correspondence with a putative model of how clinicians reason with information about patients. Here, model structure refers to the model’s parameterisation whereas function refers to the computational processes that transform inputs to outputs (a concrete and tutorial example is given in Supplementary Information). In the contemporary literature, this is described as “intrinsic interpretability” 2. As most contemporary AI methods used in healthcare applications are inductive, data-driven and, very often, “black-box” (particularly given the popularity of deep learning methods), intrinsic interpretability’s prescription for a human-understandable correspondence between inputs and outputs is absent, and this has resulted in the development of post-hoc techniques where another algorithm operates in parallel to the ‘main’ black-box model to provide “explanations”.

The fundamental reason for pursuing explainability is that healthcare professionals and patients must be able to trust AI tools; more precisely, a trustworthy AI implies that human actors may rely 6 on the tool to the extent they can economise on human oversight, monitoring and verification of the system’s outputs. To trust a deployed algorithm or model, we must have some understanding of how it arrives at an output, given some input. We therefore propose a framework for transparent and interpretable AI (Fig. 1 ), motivated by the principle of trustworthiness 7 using the following rubric:

Fig. 1: The TIFU framework operationalises “explainability” by focusing on how a model can be made understandable (to a user) as a function of transparency and interpretability (both definitions are elaborated in the main text). Algorithms and models will differentially satisfy these requirements; logistic regression (in green, at the top of the diagram) is shown as an example of a transparent and interpretable model.

Definition (Understandable AI). For an AI to be trustworthy, it must be valid, reliable and understandable. To be understandable, an AI must be transparent and interpretable; this serves as an operationalised approximation of explainability.

In what follows, we present the TIFU framework (Transparency and Interpretability For Understandability) focusing on understandability as a composite of transparency and interpretability. The important concepts of model reliability and validity are beyond the scope of this work but have received attention and are well described in the literature 8 ; briefly, to be reliable and valid, a model’s predictions or outputs must be calibrated and discriminating with respect to an observed outcome (or ground truth) and in addition, be generalisable (i.e. externally validated) so the model remains accurate and useful when deployed on new data not used during the model’s development 9 , 10 , 11 .
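To make the reliability/validity criteria above concrete, the short Python sketch below (our illustration on simulated data, not the paper’s supplementary code) computes discrimination and calibration on a held-out validation split; all names and data are illustrative.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.calibration import calibration_curve

# Simulated development and validation cohorts (stand-ins for real data)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
p_val = model.predict_proba(X_val)[:, 1]

# Discrimination: can the model separate cases from non-cases?
print(f"AUC: {roc_auc_score(y_val, p_val):.3f}")
# Calibration: do predicted probabilities match observed event rates?
print(f"Brier score: {brier_score_loss(y_val, p_val):.3f}")
obs, pred = calibration_curve(y_val, p_val, n_bins=10)
for o, p in zip(obs, pred):
    print(f"mean predicted {p:.2f} -> observed rate {o:.2f}")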

We proceed by first surveying the mental health and psychiatric literature claiming to deliver explainable AI in a variety of application domains. Then, we highlight the connections to existing literature, drawing together consistent and concrete definitions that support the TIFU model. Throughout, we adhere to a convention of discussing understandability rather than “explainability”. Finally, we conclude with observations and recommendations for building systems that adhere to TIFU.

Diverse definitions

To motivate our proposal, we searched PubMed and established that specific applications of XAI in mental health and psychiatry first began appearing in 2018 shortly after the inaugural International Joint Conference on Artificial Intelligence Workshop on Explainable AI in 2017 12 . We then surveyed papers published from 1st January 2018 through 12th April 2022 to examine how the term “explainable” is used in this literature. We found a diversity of definitions with a loose—often vernacular—meaning. We located 25 papers eligible for review, of which 15 were original research and 10 were reviews (see Supplementary Information for search details).

In Table 1, we summarise the 15 original research articles, grouped by application (predominantly, neuroimaging and survey data). Notably, in neuroimaging applications, where deep learning methods were most used, we found that the definition of explainability almost always defers to the XAI method or technique used (most often feature importance methods, e.g. Shapley 13 and LIME 14). Occasionally 15, 16, methods with explainability or interpretability “by design” 2 were used; these papers both used regression-based methods. Further, only three papers 17, 18, 19 evaluated their proposed explainable AI with respect to how humans might make use of the explanations—arguably, an essential ground-truth for a successful XAI implementation. The situation was notably different in studies making use of survey data, where evaluation of how humans might understand the AI’s inferences, discoveries or predictions was more common, as were attempts at a more explicit definition of what “explainability” was intended to mean; the same studies were less likely to simply defer to the methods used.

The uses of AI were most often a combination of prediction and discovery (8 of the 15 studies); by this, we mean that while e.g. classifiers were built to discriminate between patients and controls (with the intent of making predictions for new patients), often, the trained models were then dissected to provide insights on the high-dimensional multivariate input data—similarly to how inferential methods are used in classical statistical analyses. This may signal that when researchers are faced with multivariate data but an absence of clear a priori knowledge about the application that would assist engineering a solution, the flexibility of supervised learning delivers automated feature selection. It is no surprise that this approach was prevalent in neuroimaging studies, where the use of deep learning (especially, image processing architectures) is notable for studies which report a combination of prediction and discovery.

Finally, we note that when an array of ML methods were used (e.g. testing and then selecting a best-performing classifier in either neuroimaging or survey data), with one exception 20 , there was no definition of what explainability meant and the authors deferred to the XAI method used. Thematically, almost all original research papers follow a pattern of describing why XAI is important, usually presenting a prima facie argument (e.g. that human operators need to be able to understand what the AI is delivering) with few explicitly addressing a definition with respect to the application domain or methods used. More often, rather than being explicitly defined—or addressing how the research delivers explainability—papers defer to methods (most commonly, feature importance) or assume XAI is conventional wisdom.

A framework for understandable AI/ML in mental health applications

Given the diversity of definitions of “explainability”, we now describe a framework for “understandable AI/ML” for mental health research that centers transparency and interpretability—both concepts with more consistent meanings in the literature—and, recalling our earlier definition, we propose understandability as the most concrete approximation 21 to the multifarious definitions and uses of the term “explainability”. To do this, we anchor our definitions to models that have intrinsic interpretability or are understandable by design (i.e. linear statistical models). A tutorial example (comparing a fully understandable linear model to an opaque neural network model) is given in Supplementary Information.

In Fig. 1, we show the TIFU framework. An AI/ML algorithm takes some input and performs operations to derive a feature space, which is the basis for downstream computations that implement the desired functionality, e.g. classification, regression, function approximation, etc. The derived feature space is usually optimised to ensure the downstream task is tractable. If we denote the output of a model y, the multivariate input x, f(x) some function mapping from inputs to the feature space (which may be a composition of many functions), and g(f(x)) the downstream process (that operates on the feature space and may also be a non-trivial composition of functions), then:

Definition (Transparency). The inputs x and feature space f(x) of a model should either

stand in close correspondence to how humans understand the same inputs, or

relationships between inputs (and their representation in feature space) should afford an interpretation that is clinically meaningful.

For example, if the inputs to a model are Age and performance on some cognitive task, TestScore, then the model is feature transparent if:

trivially, the feature space is identical to the inputs, f(x) ≡ x,

the feature space can be described, e.g., as an explicit function of the inputs f(x) = Age + TestScore²—in this example, the function may represent feature engineering that includes human-expert domain knowledge relevant to the application,

the feature space can be represented to a human operator such that differences and similarities between exemplars (i.e. two different patients) preserve or afford a clinical meaning, e.g. exemplars with similar TestScore values aggregate together even if they differ in Age. In this example, let f(x₁) represent an individual with a low TestScore and younger Age (a clinical group representing premature onset of a cognitive disorder) and f(x₂) represent an individual with a low TestScore and higher Age (representing another clinical group)—if f(⋅) is a non-trivial function, we need to provide a mechanism that exposes why x₁ and x₂ are represented differently/similarly under f(⋅), consistent with a human expert’s differentiation of the two cases; an obvious method being distance-preserving mappings for unsupervised algorithms or conformity measures for e.g. supervised classification with deep learning 22.

We need not commit to any one way of defining “relationships” between inputs—they could be probabilistic (different exemplars have similar probabilities of membership to components of a mixture model of the feature space), geometric (distances on some manifold representation) or topological (such as nearest-neighbour sets). It matters only that the feature space is represented in a way that aligns with the clinical problem/population (see for example, Supplementary Information Figure 2) .
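As a concrete sketch of these three routes to transparency, the Python snippet below uses the hypothetical Age/TestScore example from above; the feature functions and the two exemplar patients are our own illustrations, not taken from any cited study.

import numpy as np

# (a) trivial transparency: the feature space is the input space, f(x) == x
def f_identity(x):
    return np.array([x["Age"], x["TestScore"]])

# (b) explicit, human-engineered features: f(x) = Age + TestScore^2
def f_engineered(x):
    return np.array([x["Age"] + x["TestScore"] ** 2])

# (c) for a non-trivial f, expose why two exemplars sit near/far in feature
# space by relating a feature-space distance back to the raw input differences
def explain_distance(x1, x2, f):
    d = float(np.linalg.norm(f(x1) - f(x2)))
    raw_diffs = {k: x1[k] - x2[k] for k in x1}
    return d, raw_diffs

x1 = {"Age": 35.0, "TestScore": 4.0}  # low score, younger (premature-onset group)
x2 = {"Age": 78.0, "TestScore": 5.0}  # low score, older (another clinical group)
print(f"engineered feature for x1: {f_engineered(x1)[0]:.1f}")
d, diffs = explain_distance(x1, x2, f_identity)
print(f"feature-space distance: {d:.1f}; raw input differences: {diffs}")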

Definition (Interpretable). For a model to be interpretable—akin to the concept of algorithmic transparency 21 —we require one or more of the following:

The function (computational processes) of g(⋅) can be articulated so the outputs can be understood as transformations of the inputs.

The structure (parameterisation) of g(⋅) can be described and affords a clinical interpretation.

The presentation of g(⋅) allows a human operator to explore qualitative relationships between inputs and outputs (i.e. the behaviour of the model).

Clearly, criterion (a) will be difficult to achieve in all but the simplest cases (e.g. primitive arithmetic operations) and, similarly, criterion (b) will be difficult to achieve for methods lacking the theoretical underpinning of e.g. linear statistical models. Consequently, criterion (c) is likely to be leveraged in many applications where g(⋅) is some non-trivial function of its inputs.

For example, logistic regression admits all three of the desiderata for interpretability as follows:

The computational processes (function) are: first, compute a weighted sum of the inputs f(x) = x⊺β, e.g. representing the log odds of x being a positive case on the logit scale; then apply a “link” function that converts the unbounded weighted sum into a probability, g(f(x)) = 1/(1 + exp(−f(x))).

The parameterisation (structure) β affords a direct interpretation—once exponentiated—as odds ratios for each of the inputs xᵢ ∈ x with respect to the output.

The presentation is straightforwardly that Pr(y = 1 ∣ x) = g(f(x))—although we might consider a format established to be more compatible with clinician reasoning, e.g. natural frequencies instead of probability statements 23, 24.
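A minimal Python sketch of this decomposition (with made-up coefficients, purely for illustration) makes the structure/function/presentation split explicit:

import numpy as np

beta = np.array([0.04, -0.30])  # hypothetical weights for [Age, TestScore]
intercept = -1.5

def f(x):
    # structure: weighted sum of inputs = log odds
    return intercept + x @ beta

def g(z):
    # function: inverse-logit "link" maps log odds to a probability
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([60.0, 8.0])  # one hypothetical patient: Age=60, TestScore=8
print(f"Pr(y=1 | x) = {g(f(x)):.3f}")  # presentation

# interpretation of structure: exp(beta_i) is the odds ratio per unit change
for name, b in zip(["Age", "TestScore"], beta):
    print(f"odds ratio for {name}: {np.exp(b):.2f}")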

The obvious stress-test for our definitions of transparency and interpretability (to deliver an understandable model) is applications of deep learning. For example, in ref. 15, the authors use convolutional networks to pre-process resting-state fMRI data and then, downstream, classify cases into those likely to have obsessive compulsive disorder. Their modelling uses three different architectures; two that operate on the fMRI data directly (where f(⋅) is composed of two layers of convolution, followed by max pooling and a linear output layer that implements supervised feature selection) and another where previously engineered, anatomically-parcellated classifiers provide a feature representation. The two architectures that make use of convolutional layers (at least, as they are presented 15) do not meet the criteria for transparency or interpretability. However, the third model (parcellation-based features) does meet the transparency criteria because, for an individual patient, each anatomical-parcellation classifier delivers a feature value proportional to that brain region’s probability of being ‘pathological’ (e.g. being similar or different to the prototype for a disorder or healthy patient). Further, for interpretability, although the upstream parcellation system meets criteria (a) and (b), the results as presented in ref. 15 align best with the presentation criterion (c).

Presentation and clinical reasoning

In our definition of understandability (as transparency and interpretability), we rely heavily on a human operator being able to relate the behaviour of algorithms (and their inputs) to their everyday professional expertise. We include “presentation” as a third component of interpretability because we expect that, in practice, a model’s operation will often be too complex for criteria (a) and (b) to be met directly. To this end, consistent with others 21, we add that the model must present input/output relationships aligned with the cognitive strategies that clinicians use. We focus on abductive and inductive (in contrast to deductive) inference as the most applicable framework 25, 26, 27, 28. Some of the literature surveyed leverages user interface design to present the outputs of complex models in a way familiar to clinicians 16, 20, 29, 30, 31. Even for generalised linear models, clinicians may struggle to directly interrogate the structure (parameterisation) and function (computations) but, of course, have recourse to the interpretability afforded by the structure of these models (criterion b).

Interactive visualisation may allow clinicians to “probe” the model for changes in the probability of an outcome (e.g. a diagnosis) by manipulating the input features (such as presence/absence of symptoms), and this assists users to develop a qualitative understanding of the relationship between inputs and outputs. By analogy, nomograms 32 allow a user to visually compute the output of complex mathematical functions without access to explicit knowledge of the required operations (i.e. the function/computational processes). In the deep learning literature 33, a similar idea estimates the change in a classifier’s output, i.e. the change in g(f(x)), for systematic changes in the input x 34, and similar perturbation techniques 35 can be applied to components of models (i.e. to f and g separately, or to components of g if it is a non-trivial composition of functions). However, the focus of these predominantly engineering solutions is on image processing systems, and there is a dearth of literature exploring the specific context of clinical reasoning, e.g. how an AI might assist with diagnosis outside the narrower but familiar domain of imaging.
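This perturbation idea can be sketched generically; the snippet below is our illustration, assuming any fitted scikit-learn-style classifier exposing predict_proba, with hypothetical binary symptom features.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Simulated binary symptom data and outcome, purely for illustration
X = rng.integers(0, 2, size=(200, 3)).astype(float)
y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, 200) > 1).astype(int)
model = LogisticRegression().fit(X, y)

def probe(model, x, feature_names):
    """Flip each binary input in turn and report the change in g(f(x))."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    for i, name in enumerate(feature_names):
        x_pert = x.copy()
        x_pert[i] = 1.0 - x_pert[i]  # toggle symptom present/absent
        p = model.predict_proba(x_pert.reshape(1, -1))[0, 1]
        print(f"flip {name}: {base:.3f} -> {p:.3f} (delta {p - base:+.3f})")

probe(model, np.array([1.0, 0.0, 1.0]), ["low_mood", "anhedonia", "insomnia"])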

The requirements for presentation will need alignment with the different use-cases for AI. With respect to the “Application” column of Table 1: for discovery applications of AI, inductive reasoning allows us to use statistical or probabilistic information to generalise from examples; in Fig. 2, if we know that 80% of people with psychosis (a hypothesis or diagnosis, D) have abnormal beliefs (evidence, or signs/symptoms, S), then induction allows us to make generalisations about individuals with D given what we know about the relationship with S, or symbolically, D → S. Presentation as induction would be useful when dissecting disorder sub-types 36 and in neuroscientific discovery 37, where dimensionality reduction and unsupervised clustering methods align with an inductive presentation.

Fig. 2: Using the example of making diagnoses, the left panel shows an inductive inference using a statistical syllogism—a probable conclusion for an individual (in the example given, the probability an individual will experience abnormal belief symptoms, given they have a diagnosis of a psychotic disorder) is obtained by generalising from available data (i.e. the contingency table showing the proportions of patients with psychosis who have abnormal beliefs). In the right panel, abductive inference affords computing the best or most plausible explanation (i.e. a diagnosis of psychosis or depression) for a given observation (that a person has abnormal beliefs) using the available data (a contingency table for two diagnoses).

More relevant to decision making and prediction applications is abductive reasoning: the process of inferring which hypotheses are best supported given some evidence—for example, in diagnostic reasoning, we can consider evidence (as signs/symptoms, S) and hypotheses as a number of candidate diagnoses D₁, D₂, …, Dₙ. We aim to infer which Dᵢ best accounts for the evidence S, and this is compatible with conditional probability and Bayes theorem; that is, we seek Pr(Dᵢ ∣ S) ∝ Pr(S ∣ Dᵢ) ⋅ Pr(Dᵢ). In contrast to induction, we are inferring S → D. Inductive inference differs from deductive inference because although the “direction” is D → S in deductive inference, the truth of D and S is absolute; for example, in Fig. 2, deductive inference would assert that it is necessarily true that if a person has psychosis, they definitely have abnormal beliefs (cf. the probabilistic interpretation afforded by inductive reasoning). To re-use the example of making a diagnosis, psychiatric diagnoses have “many-to-many” mappings with the underlying biology 38, 39, 40; the probabilistic nature of psychiatric diagnosis (i.e. the mapping of signs/symptoms to diagnoses) has long been recognised 41 and has, consequently, influenced the dimensional characterisation of disorders 42. Here, we suggest that an abductive presentation would be most suitable.
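The abductive step can be sketched directly from a contingency table; the numbers below are illustrative stand-ins for those in Fig. 2, not epidemiological estimates.

# Pr(D | S) is proportional to Pr(S | D) * Pr(D), for S = "abnormal beliefs"
likelihood = {"psychosis": 0.80, "depression": 0.15}  # Pr(S | D), illustrative
prior = {"psychosis": 0.30, "depression": 0.70}       # Pr(D), illustrative

unnorm = {d: likelihood[d] * prior[d] for d in likelihood}
z = sum(unnorm.values())
posterior = {d: v / z for d, v in unnorm.items()}

# the "best explanation" is the diagnosis with the highest posterior
for d, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"Pr({d} | abnormal beliefs) = {p:.2f}")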

Conclusion and recommendations

We now describe the implications of both our survey of the literature on XAI in mental health and the proposed TIFU framework. First, we note that the applications of XAI in mental health were broadly prediction, discovery or a combination of the two. Second, we require understandability because clinical applications are high-stakes. Third, we expect that when we deploy AI tools, they should assist clinicians and not introduce further complexity. In our review of recent literature on explainable AI in mental health applications, we note that in 8 of 15 original research papers, prediction and discovery are considered together—under our framework, this would require consideration of the TIFU components with respect to each task (prediction and discovery) separately.

Our first recommendation is driven by the diversity of AI/ML techniques deployed in applications on clinical survey data e.g., refs. 20 , 31 , 43 —which here, means voluminous, tabulated data with high numbers of input (independent) variables and where there is no a priori data generating model or domain knowledge which enables human-expert feature selection/engineering. These applications were characterised by (a) the use of multiple AI/ML methods and their comparison to find the “best” model and (b) post hoc interrogation of the model (e.g. by feature importance methods) to provide a parsimonious summary of those features for the best performing model. The AI/ML methods are essentially being used to automate the exploration of data for signals that associate with an outcome of interest while simultaneously, delivering a functioning classifier that could be deployed to assist clinicians.

Recommendation One: When multiple AI/ML techniques (that are not transparent and interpretable) are used to discover which inputs are features reliably associated with an output of interest, the “discovered” feature/output associations should then be tested by constructing, post hoc, a transparent and interpretable model that uses only those discovered features.

In essence, we are suggesting that the wholesale application of AI/ML methods should be seen as exploratory analyses, to be followed by constructing a transparent and interpretable model. For example, in ref. 43 the best-performing classification method was shown to be XGBoost, and post hoc analyses with SHAP values identified a subset of inputs with most utility as features for classifying whether an individual was likely to experience a change in mood state during the COVID-19 pandemic lockdown. Our recommendation would mean constructing a model with the discovered features—clearly identifying the mapping from inputs to features—examining its performance and ensuring that the criteria for interpretability are met. A counter-argument would be that this leads to circular analyses—or “double dipping” the data—leading to sampling bias in the interpretable model. This may be true, but it is a limitation of the approach in general: if the discovered features migrated into the interpretable model are robust, the understandable model should still perform when prospectively evaluated in another validation sample. This latter step then ensures that the model is valid, reliable and understandable, which can only provide reassurance to clinicians using the system when it is deployed. This recommendation is similar to “knowledge distillation” 44, and methods have been developed for extracting decision trees from deep learning models 45.
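A sketch of this two-stage workflow follows (our illustration on simulated data; a gradient-boosted classifier plus permutation importance stand in for the XGBoost-plus-SHAP pipeline in ref. 43):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=30, n_informative=5,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Stage 1 (exploratory): fit a flexible, opaque model and rank features
black_box = GradientBoostingClassifier(random_state=1).fit(X_tr, y_tr)
imp = permutation_importance(black_box, X_te, y_te, n_repeats=10, random_state=1)
top = np.argsort(imp.importances_mean)[::-1][:5]
print("discovered feature indices:", top)

# Stage 2 (confirmatory): refit a transparent, interpretable model on the
# discovered features only; this model must then be validated prospectively
interpretable = LogisticRegression(max_iter=1000).fit(X_tr[:, top], y_tr)
print("interpretable model accuracy:", interpretable.score(X_te[:, top], y_te))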

Our next recommendation is driven by the observation (see Table 1) that deep multi-layer networks were used to implement classification as a downstream task and, essentially, supervised learning of a feature-space representation for very high-dimensional inputs. In these cases, we can identify a partition between the upstream component that performs feature representation, f(⋅), and the downstream task, g(⋅).

Recommendation Two: When using high-volume, high-dimensional (multivariate) data without a priori domain-specific constraints, where we instead wish to automate reducing the data to feature representations f essential for a downstream task g: if the methods used to implement f are neither transparent (data, features) nor interpretable (function, structure), they should be engineered and then deployed as a separate component for use with interpretable methods for the downstream task g.

Essentially, we are recommending that when we rely on opaque models for processing high-volume/dimensional data, they should be treated as a pre-processing ‘module’, and the downstream task g that depends on the feature representation should be implemented using models that meet the interpretability criteria. The anatomical parcellation model developed for identifying people with OCD 15 exemplifies this. Our recommendation would be that, instead of using subsequent multi-layer networks for classifying OCD, a simpler interpretable model should be preferred: the outputs of each anatomically-parcellated pre-processing ‘module’ are then transparent, and the downstream classification task is interpretable.
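A sketch of this modular separation (our illustration; a fixed random projection stands in for a pre-trained, opaque upstream network such as the parcellation classifiers):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

class FrozenFeatureModule:
    """Stand-in for a pre-trained, opaque upstream model f(.)."""
    def __init__(self, in_dim, out_dim):
        self.W = rng.normal(size=(in_dim, out_dim))
    def transform(self, X):
        # each output plays the role of one named upstream feature,
        # e.g. a region-level 'pathology' score
        return np.tanh(X @ self.W)

X = rng.normal(size=(500, 100))           # high-dimensional raw input
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy outcome

f = FrozenFeatureModule(in_dim=100, out_dim=8)   # upstream, opaque, frozen
Z = f.transform(X)                                # inspectable feature values

g = LogisticRegression(max_iter=1000).fit(Z, y)   # downstream, interpretable
print("downstream accuracy:", g.score(Z, y))
# g's coefficients now attach an odds ratio to each upstream feature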

A counter-argument might be that the upstream feature representation is not compact enough, or that multiple layers in the downstream classifier are required to flexibly aggregate the feature representation for g; if this is the case, then the model will necessarily remain opaque, lack understandability and may well be vulnerable to over-fitting in both f and g—and is therefore unlikely to be useful in high-stakes applications.

We have consistently described AI/ML models as being composed of an “upstream” component that delivers a feature representation, f(⋅), coupled to another “downstream” process, g(⋅), that uses the feature representation to perform e.g. prediction, discrimination/classification and so on. This may not be appropriate for all AI/ML methods—however, from our review of current XAI in mental health and psychiatry, this is how AI/ML methods are being used.

Our proposed TIFU framework simultaneously lowers our ambitions for what “explainability” is while emphasising and making concrete a definition that centers (a) computational processes and structures, (b) the presentation of outputs and (c) the way that data or inputs relate to the clinical domain. Our approach draws on principles in the general XAI literature—notably, ref. 2 and ref. 21 —extending these principles to specific considerations for psychiatry and mental health including inductive and abductive inference and the differing nature of understandability for prediction, discovery and decision-making applications. To conclude, our ambition for the TIFU framework is to improve the consistency and specificity of what we mean when we allude to “explainability” in research involving AI and ML for mental health applications.

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

The simulated data presented is freely available at https://github.com/danwjoyce/explain-simple-models .

Code availability

The simulation code presented is freely available at https://github.com/danwjoyce/explain-simple-models .

Ghassemi, M., Oakden-Rayner, L. & Beam, A. L. The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit. Health 3 , e745–e750 (2021).


Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1 , 206–215 (2019).

Shortliffe, E. H., Buchanan, B. G. & Feigenbaum, E. A. Knowledge engineering for medical decision making: a review of computer-based clinical decision aids. Proc. IEEE 67 , 1207–1224 (1979).

Fagan, L. M., Shortliffe, E. H. & Buchanan, B. G. Computer-based medical decision making: from MYCIN to VM. Automedica 3 , 97–108 (1980).


Fleiss, J. L., Spitzer, R. L., Cohen, J. & Endicott, J. Three computer diagnosis methods compared. Arch. Gen. Psychiatry 27 , 643–649 (1972).


Ferrario, A., Loi, M. & Viganò, E. Trust does not need to be human: it is possible to trust medical AI. J. Med. Ethics 47 , 437–438 (2021).

Li, B. et al. Trustworthy AI: From Principles to Practices. ACM Comput. Surv. 55 , 46 (2023).

Steyerberg, E. W. Clinical Prediction Models 2nd edn (Springer, 2019).

Justice, A. C., Covinsky, K. E. & Berlin, J. A. Assessing the generalizability of prognostic information. Ann. Int. Med. 130 , 515–524 (1999).

Altman, D. G., Vergouwe, Y., Royston, P. & Moons, K. G. Prognosis and prognostic research: validating a prognostic model. BMJ 338 , b605 (2009).

Collins, G. S., Reitsma, J. B., Altman, D. G. & Moons, K. G. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (tripod): the tripod statement. J. Brit. Surg. 102 , 148–158 (2015).

Biran, O. & Cotton, C. Explanation and justification in machine learning: a survey. in IJCAI-17 Workshop on Explainable AI (XAI) , Vol. 8, 8–13 (2017).

Lipovetsky, S. & Conklin, M. Analysis of regression in game theory approach. Appl. Stoch. Models Bus. Ind. 17 , 319–330 (2001).

Ribeiro, M. T., Singh, S. & Guestrin, C. “Why should I trust you?” Explaining the predictions of any classifier. in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 1135–1144 (2016).

Kalmady, S. V. et al. Prediction of obsessive-compulsive disorder: importance of neurobiology-aided feature design and cross-diagnosis transfer learning. Biol. Psychiatry Cogn. Neurosci. Neuroimaging 7 , 735–746 (2021).

Bučková, B., Brunovský, M., Bareš, M. & Hlinka, J. Predicting sex from EEG: validity and generalizability of deep-learning-based interpretable classifier. Front. Neurosci. 14 , 589303 (2020).

Supekar, K. et al. Robust, generalizable, and interpretable artificial intelligence-derived brain fingerprints of autism and social communication symptom severity. Biol. Psychiatry 92 , 643–653 (2022a).

Supekar, K. et al. Deep learning identifies robust gender differences in functional brain organization and their dissociable links to clinical symptoms in autism. Br. J. Psychiatry 220 , 202–209 (2022b).

Al Zoubi, O. et al. Machine learning evidence for sex differences consistently influences resting-state functional magnetic resonance imaging fluctuations across multiple independently acquired data sets. Brain Connect . 12 , https://doi.org/10.1089/brain.2020.0878 (2021).

Byeon, H. Exploring factors for predicting anxiety disorders of the elderly living alone in south korea using interpretable machine learning: a population-based study. Int. J. Environ. Res. Public Health 18 , 7625 (2021).

Arrieta, A. B. et al. Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 58 , 82–115 (2020).

Papernot, N. & McDaniel, P. Deep k-nearest neighbors: towards confident, interpretable and robust deep learning. Preprint at https://arxiv.org/abs/1803.04765 (2018).

Hoffrage, U. & Gigerenzer, G. Using natural frequencies to improve diagnostic inferences. Acad. Med. 73 , 538–540 (1998).

Gigerenzer, G., Gaissmaier, W., Kurz-Milcke, E., Schwartz, L. M. & Woloshin, S. Helping doctors and patients make sense of health statistics. Psychol. Sci. Public Interest 8 , 53–96 (2007).

Douven, I. in The Stanford Encyclopedia of Philosophy (ed. Zalta, E. N.) (Metaphysics Research Lab, Stanford University, 2021).

Rapezzi, C., Ferrari, R. & Branzi, A. White coats and fingerprints: diagnostic reasoning in medicine and investigative methods of fictional detectives. BMJ 331 , 1491–1494 (2005).

Altable, C. R. Logic structure of clinical judgment and its relation to medical and psychiatric semiology. Psychopathology 45 , 344–351 (2012).

Reggia, J. A., Perricone, B. T., Nau, D. S. & Peng, Y. Answer justification in diagnostic expert systems-Part I: Abductive inference and its justification. IEEE Transactions on Biomedical Engineering 263–267 (1985).

Ammar, N. & Shaban-Nejad, A. Explainable artificial intelligence recommendation system by leveraging the semantics of adverse childhood experiences: proof-of-concept prototype development. JMIR Med. Inform. 8 , e18752 (2020).

Jaber, D., Hajj, H., Maalouf, F. & El-Hajj, W. Medically-oriented design for explainable AI for stress prediction from physiological measurements. BMC Med. Inform. Decis. Mak. 22 , 38 (2022).

Jha, I. P., Awasthi, R., Kumar, A., Kumar, V. & Sethi, T. Learning the mental health impact of COVID-19 in the United States with explainable artificial intelligence: observational study. JMIR Ment. Health 8 , e25097 (2021).

Levens, A. S. Nomography (John Wiley and Sons, 1948).

Samek, W., Montavon, G., Vedaldi, A., Hansen, L. K. & Müller, K.-R. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning , Vol. 11700 (Springer Nature, 2019).

Zintgraf, L. M., Cohen, T. S., Adel, T. & Welling, M. Visualizing deep neural network decisions: prediction difference analysis. Preprint at https://arxiv.org/abs/1702.04595 (2017).

Shahroudnejad, A. A survey on understanding, visualizations, and explanation of deep neural networks. Preprint at https://arxiv.org/abs/2102.01792 (2021).

Drysdale, A. T. et al. Resting-state connectivity biomarkers define neurophysiological subtypes of depression. Nat. Med. 23 , 28–38 (2017).

Vu, M.-A. T. et al. A shared vision for machine learning in neuroscience. J. Neurosci. 38 , 1601–1607 (2018).

Burmeister, M., McInnis, M. G. & Zöllner, S. Psychiatric genetics: progress amid controversy. Nat. Rev. Genet. 9 , 527–540 (2008).

Henderson, T. A. et al. Functional neuroimaging in psychiatry-aiding in diagnosis and guiding treatment. What the American Psychiatric Association does not know. Front. Psychiatry 11 , 276 (2020).

Murray, G. K. et al. Could polygenic risk scores be useful in psychiatry?: a review. JAMA Psychiatry 78 , 210–219 (2021).

Feighner, J. P. et al. Diagnostic criteria for use in psychiatric research. Arch. Gen. Psychiatry 26 , 57–63 (1972).

Kraemer, H. C., Noda, A. & O’Hara, R. Categorical versus dimensional approaches to diagnosis: methodological challenges. J. Psychiatr. Res. 38 , 17–25 (2004).

Ntakolia, C. et al. An explainable machine learning approach for COVID-19’s impact on mood states of children and adolescents during the first lockdown in greece. Healthcare 10 , 149 (2022).

Craven, M. & Shavlik, J. Extracting tree-structured representations of trained networks. in Advances in Neural Information Processing Systems Vol. 8 (1995).

Liu, X., Wang, X. & Matwin, S. Improving the interpretability of deep neural networks with knowledge distillation. in 2018 IEEE International Conference on Data Mining Workshops (ICDMW) , 905–912 (IEEE, 2018).

Chang, Y.-W., Tsai, S.-J., Wu, Y.-F. & Yang, A. C. Development of an AI-based web diagnostic system for phenotyping psychiatric disorders. Front. Psychiatry 11, 542394 (2020).

Ben-Zion, Z. et al. Neural responsivity to reward versus punishment shortly after trauma predicts long-term development of posttraumatic stress symptoms. Biol. Psychiatry Cogn. Neurosci. Neuroimaging 7 , 150–161 (2022).

Smucny, J., Davidson, I. & Carter, C. S. Comparing machine and deep learning-based algorithms for prediction of clinical improvement in psychosis with functional magnetic resonance imaging. Hum. Brain Mapp. 42 , 1197–1205 (2021).

Mishra, S. et al. An explainable intelligence driven query prioritization using balanced decision tree approach for multi-level psychological disorders assessment. Front. Public Health 9 , 795007 (2021).

van Schaik, P., Peng, Y., Ojelabi, A. & Ling, J. Explainable statistical learning in public health for policy development: the case of real-world suicide data. BMC Med. Res. Methodol. 19 , 152 (2019).


Acknowledgements

D.W.J. and A.K. were supported in part by the NIHR AI Award for Health and Social Care (AI_AWARD02183); A.K. declares a research grant from GlaxoSmithKline unrelated to this work. The views expressed are those of the authors and not necessarily those of the NIHR, the University of Oxford or UK Department of Health. A.C. and K.A.S. are supported by the National Institute for Health Research (NIHR) Oxford Cognitive Health Clinical Research Facility. A.C. is supported by an NIHR Research Professorship (grant RP-2017-08-ST2-006). D.W.J. and A.C. are supported by the NIHR Oxford Health Biomedical Research Centre (grant BRC-1215-20005).

Author information

Authors and Affiliations

University of Oxford, Department of Psychiatry, Warneford Hospital, Oxford, OX3 7JX, UK

Dan W. Joyce, Andrey Kormilitzin, Katharine A. Smith & Andrea Cipriani

Institute of Population Health, Department of Primary Care and Mental Health, University of Liverpool, Liverpool, L69 3GF, UK

Dan W. Joyce

Oxford Precision Psychiatry Lab, NIHR Oxford Health Biomedical Research Centre, Warneford Hospital, Oxford, OX3 7JX, UK

Katharine A. Smith & Andrea Cipriani

Oxford Health NHS Foundation Trust, Warneford Hospital, Oxford, OX3 7JX, UK


Contributions

All four authors (D.W.J., A.K., K.A.S. and A.C.) reviewed the literature, conceived of the proposed framework and contributed to the manuscript writing and revisions. All authors approved the final manuscript.

Corresponding author

Correspondence to Dan W. Joyce .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Material

Reporting Summary

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Joyce, D.W., Kormilitzin, A., Smith, K.A. et al. Explainable artificial intelligence for mental health through transparency and interpretability for understandability. npj Digit. Med. 6 , 6 (2023). https://doi.org/10.1038/s41746-023-00751-9


Received : 07 August 2022

Accepted : 10 January 2023

Published : 18 January 2023

DOI : https://doi.org/10.1038/s41746-023-00751-9





Artificial intelligence in mental healthcare: an overview and future perspectives

Affiliations

  • 1 Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, The University of Texas Southwestern Medical Center, Dallas, Texas, United States.
  • 2 Program in Computational Biology and Bioinformatics, Yale University, New Haven, Connecticut, United States.
  • 3 Department of Mathematical Sciences, The University of Texas at Dallas, Richardson, Texas, United States.
  • 4 Department of Bioinformatics, The University of Texas Southwestern Medical Center, Dallas, Texas, United States.
  • 5 Simmons Comprehensive Cancer Center, The University of Texas Southwestern Medical Center, Dallas, Texas, United States.
  • PMID: 37698582
  • PMCID: PMC10546438 (available on 2024-10-01)
  • DOI: 10.1259/bjr.20230213

Artificial intelligence is disrupting the field of mental healthcare through applications in computational psychiatry, which leverages quantitative techniques to inform our understanding, detection, and treatment of mental illnesses. This paper provides an overview of artificial intelligence technologies in modern mental healthcare and surveys recent advances made by researchers, focusing on the nascent field of digital psychiatry. We also consider the ethical implications of artificial intelligence playing a greater role in mental healthcare.


Conflict of interest statement

The authors declare no competing interests.




Artificial intelligence in mental health research: new WHO study on applications and challenges

Using artificial intelligence (AI) in mental health services and research has potential, but a new study finds significant shortcomings that may indicate overly accelerated promotion of new AI models that have yet to be evaluated as viable in the real world. 

How AI can support mental health services  

In 2021, over 150 million people in the WHO European Region were living with a mental health condition. Over the last few years, the COVID-19 pandemic has made matters worse. People have been less able to access services, and increases in stress, adverse economic conditions, conflict and violence have shown how vulnerable mental health can be. 

In parallel, AI has been giving rise to a revolution in medicine and health care. AI is seen as a novel tool in the planning of mental health services, as well as in identifying and monitoring mental health problems in individuals and populations. AI-driven tools can use digitized health-care data – available in a range of formats including electronic health records, medical images and hand-written clinical notes – to automate tasks, support clinicians and deepen understanding of the causes of complex disorders.  

WHO/Europe’s “Regional digital health action plan for the WHO European Region 2023–2030”, launched in September 2022, also recognizes the need for innovation in predictive analytics for better health through big data and AI.  

“Given the increasing use of AI in health care, it is relevant to assess the current status of the application of AI for mental health research to inform about trends, gaps, opportunities and challenges,” says Dr David Novillo-Ortiz, Regional Adviser on Data and Digital Health at WHO/Europe, and co-author of the study. 

Challenges  

“Methodological and quality flaws in the use of artificial intelligence in mental health research: a systematic review”, authored by experts from the Polytechnic University of Valencia, Spain, and WHO/Europe, looked at the use of AI for mental health disorder studies between 2016 and 2021.   

“We found that AI application use in mental health research is unbalanced and is mostly used to study depressive disorders, schizophrenia and other psychotic disorders. This indicates a significant gap in our understanding of how they can be used to study other mental health conditions,” adds Dr Ledia Lazeri, Regional Advisor for Mental Health at WHO/Europe. 

Because of the possibilities AI offers, policy-makers may gain insight into the current state of mental disorders and more efficient strategies to promote health. However, AI often involves complex use of statistics, mathematical approaches and high-dimensional data that, if not adequately handled, can lead to bias, inaccurate interpretation of results and over-optimistic estimates of AI performance. The study found significant flaws in how the AI applications handle statistics, infrequent data validation and little evaluation of the risk of bias.

In addition, several other areas cause concern, such as the lack of transparent reporting on AI models, which undermines their replicability. The study found that data and models mostly remain private, and there is little collaboration between researchers.  

“The lack of transparency and methodological flaws are concerning, as they delay AI’s safe, practical implementation. Also, data engineering for AI models seems to be overlooked or misunderstood, and data is often not adequately managed. These significant shortcomings may indicate overly accelerated promotion of new AI models without pausing to assess their real-world viability,” explains Dr Novillo-Ortiz. 

“Artificial intelligence stands as a cornerstone of the upcoming digital revolution. In this study, we had a glimpse of what is to come in the next few years and will drive health-care systems to adapt their structures and procedures to advance in the provision of mental health services,” adds Antonio Martinez-Millana, Assistant Professor at the Polytechnic University of Valencia, and co-author of the study. 

Select study results were presented at an event organized by WHO/Europe on 7 December 2022. Entitled “Big data analytics and AI in mental health,” the event brought together experts from across the European Region to discuss how to realistically use AI models in planning mental health services, as well as safety and success factors, such as involving people with mental health conditions in the development process.

Artificial Intelligence for Chatbots in Mental Health: Opportunities and Challenges

  • August 2021
  • In book: Multiple Perspectives on Artificial Intelligence in Healthcare (pp. 115–128)

  • Kerstin Denecke, Bern University of Applied Sciences
  • Alaa Ali Abd-alrazaq, Weill Cornell Medicine-Qatar
  • Mowafa Househ, Hamad bin Khalifa University

  • Open access
  • Published: 02 September 2024

Integrating machine learning and artificial intelligence in life-course epidemiology: pathways to innovative public health solutions

  • Shanquan Chen   ORCID: orcid.org/0000-0002-4724-4892 1 ,
  • Jiazhou Yu   ORCID: orcid.org/0000-0002-9249-2721 2 ,
  • Sarah Chamouni 1 ,
  • Yuqi Wang 3 &
  • Yunfei Li   ORCID: orcid.org/0000-0002-0542-4641 4

BMC Medicine volume  22 , Article number:  354 ( 2024 ) Cite this article

1 Altmetric

Metrics details

The integration of machine learning (ML) and artificial intelligence (AI) techniques in life-course epidemiology offers remarkable opportunities to advance our understanding of the complex interplay between biological, social, and environmental factors that shape health trajectories across the lifespan. This perspective summarizes the current applications, discusses future potential and challenges, and provides recommendations for harnessing ML and AI technologies to develop innovative public health solutions. ML and AI have been increasingly applied in epidemiological studies, demonstrating their ability to handle large, complex datasets, identify intricate patterns and associations, integrate multiple and multimodal data types, improve predictive accuracy, and enhance causal inference methods. In life-course epidemiology, these techniques can help identify sensitive periods and critical windows for intervention, model complex interactions between risk factors, predict individual and population-level disease risk trajectories, and strengthen causal inference in observational studies. By leveraging the five principles of life-course research proposed by Elder and Shanahan—lifespan development, agency, time and place, timing, and linked lives—we discuss a framework for applying ML and AI to uncover novel insights and inform targeted interventions. However, the successful integration of these technologies faces challenges related to data quality, model interpretability, bias, privacy, and equity. To fully realize the potential of ML and AI in life-course epidemiology, fostering interdisciplinary collaborations, developing standardized guidelines, advocating for their integration in public health decision-making, prioritizing fairness, and investing in training and capacity building are essential. By responsibly harnessing the power of ML and AI, we can take significant steps towards creating healthier and more equitable futures across the life course.

Peer Review reports

Life-course epidemiology is a field of study that examines the long-term effects of biological, behavioral, and social exposures during gestation, childhood, adolescence, and adulthood on the development of chronic diseases later in life [ 1 ]. This approach recognizes that health and disease are influenced by the complex interplay of various factors across an individual’s life span and that the timing and duration of these exposures can have critical implications for future health outcomes [ 1 ].

The importance of life-course epidemiology in understanding chronic diseases lies in its ability to provide a comprehensive framework for investigating the origins and trajectories of these conditions. As defined by Elder and Shanahan, the life-course approach is based on five key principles: lifespan development, agency, time and place, timing, and linked lives (Fig.  1 ) [ 2 ]. Lifespan development recognizes that human development and aging are ongoing processes that occur throughout an individual’s life, rather than being limited to specific stages. Agency acknowledges that individuals have the capacity to make choices and take actions that shape their lives, albeit within the constraints of their environmental, social, and historical contexts. The principle of time and place emphasizes that each person’s life course is embedded within and influenced by the specific historical era and location in which they live. Timing is crucial, as the same events and behaviors can have varying effects depending on when they occur in an individual’s life course. Finally, linked lives underscores the interconnectedness of human experiences, as people influence each other through shared and interdependent relationships. By applying these principles, researchers can identify sensitive periods and critical windows during which interventions may be most effective in preventing or mitigating the risk of chronic diseases.

Fig. 1: Five principles of the life-course approach proposed by Elder and Shanahan

In recent years, machine learning (ML) and artificial intelligence (AI) have emerged as powerful tools in epidemiological research. ML is a subfield of AI that refers to the ability of computers to draw conclusions (i.e., learn) from data without being directly programmed, and it builds on traditional statistical methods [ 3 ]. These techniques offer the ability to handle vast amounts of complex, high-dimensional data, identify intricate patterns and associations, and develop predictive models that can inform personalized interventions and public health strategies. ML and AI can integrate multiple data types, such as electronic health records (EHRs), genomic data, and environmental exposures, to provide a more comprehensive understanding of the factors contributing to health outcomes across the life course. Moreover, advanced ML and AI techniques can analyze multimodal data, including structured and unstructured text, audio, image, and video data, enabling the examination of diverse information sources such as MRI scans, X-rays, recordings of heartbeats or respiratory sounds, and physical activity and behavioral patterns. Furthermore, these approaches can enhance causal inference methods, allowing researchers to better estimate the effects of exposures on health outcomes in observational settings.

The integration of ML and AI techniques in life-course epidemiology has the potential to revolutionize our understanding of the complex determinants of diseases and inform the development of more targeted and effective public health interventions. By leveraging the power of these innovative tools, researchers can uncover novel risk factors, identify critical windows for intervention, and predict individual disease trajectories with greater precision.

This perspective aims to summarize current applications of ML and AI in life-course epidemiology, discuss their future potential and challenges, and examine how such technologies can advance public health solutions. It will also consider the benefits and limitations of current applications; highlight opportunities for identifying sensitive periods, modeling complex interactions, predicting disease risk trajectories, and enhancing causal inference methods; address the challenges and ethical considerations associated with the use of ML and AI in life-course research; and provide recommendations for future directions.

Current applications of ML and AI in epidemiology

ML and AI have been increasingly applied in various areas of epidemiological studies, demonstrating their potential to advance our understanding of health and diseases. These techniques offer several key benefits, including the ability to handle large, complex datasets, identify intricate patterns and associations, and develop accurate predictive models.

One notable application of ML and AI in epidemiology is in the prediction of cardiovascular disease risk. Researchers have developed ML models that integrate clinical, genetic, and lifestyle factors to predict an individual’s risk of developing cardiovascular disease [ 4 , 5 , 6 , 7 ]. For example, a study utilized 216,152 retinal photographs from datasets in South Korea, Singapore, and the United Kingdom to train and validate deep learning algorithms [ 6 ]. The retinal photograph-derived coronary artery calcium scores were found to be comparable to those measured by CT scans [ 6 ]. Ward et al. (2020) used EHRs of 262,923 individuals to train and validate ML models, demonstrating performance that was comparable to or better than traditional equations for atherosclerotic cardiovascular disease risk [ 7 ].
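
To make the flavor of these tabular risk models concrete, here is a minimal, hedged sketch in Python: it is not the pipeline of any cited study, and the synthetic features merely stand in for clinical, genetic, and lifestyle variables.

```python
# A minimal sketch, not the pipeline of any cited study: fit a gradient-boosted
# classifier to synthetic "clinical" features and check discrimination (AUC).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a cohort table (5 features ~ age, BP, cholesterol, ...).
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]              # predicted event probability
print(f"held-out AUC: {roc_auc_score(y_te, risk):.3f}")
```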

Another area where ML and AI have shown promise is the early detection and prognosis of cancer. These techniques have been applied to predict cancer prognosis and estimate treatment response based on genomic and clinical data [ 8 , 9 , 10 , 11 , 12 , 13 , 14 ]. For instance, Lu et al. (2021) developed a deep learning network to predict early on-treatment response in metastatic colorectal cancer, which outperformed traditional methods [ 13 ].

ML and AI have also been employed to predict the onset and progression of neurodegenerative diseases, such as Alzheimer’s disease. By integrating neuroimaging, genetic, and clinical data, researchers have developed models that can identify individuals at high risk of developing Alzheimer’s disease [ 15 , 16 , 17 , 18 ]. Bhagwat et al. (2018) used a neural-network algorithm to predict the conversion from mild cognitive impairment to Alzheimer’s disease with accuracy as high as 0.90 [ 18 ].

In the realm of infectious diseases, ML and AI have been applied to predict disease outbreaks and identify high-risk areas [ 19 , 20 ]. Bengtsson et al. (2015) utilized mobile phone data and machine learning techniques to predict the spatial spread of cholera in Haiti following the 2010 earthquake [ 20 ]. Similarly, these techniques have been used to assess the health impacts of environmental exposures, such as air pollution, by estimating daily pollutant concentrations and providing high-resolution exposure assessments for epidemiological studies [ 21 ]. Moreover, Odlum and Yoon (2015) leveraged natural language processing (NLP) techniques on data extracted from the social media platform Twitter to develop a real-time model for Ebola outbreak surveillance during the early stage of the 2014 epidemic [ 22 ]. Their study showcased the potential of applying advanced computational methods to unconventional data sources for enhanced disease monitoring and early detection.

Furthermore, ML and AI have been employed to investigate the social determinants of health and identify populations at high risk of adverse health outcomes [ 23 ]. By analyzing EHRs and integrating data on social and environmental factors, researchers have developed models that predict an individual’s risk of experiencing health disparities or poor health outcomes [ 24 ].

There are cases where ML and AI models in epidemiology have been successfully implemented to assist resource allocation and decision-making in practice. During the COVID-19 pandemic, when healthcare systems were strained by increased demand, clinicians in emergency departments faced significant challenges in making patient disposition decisions based on patients’ initial symptoms and limited information. In response, Hinson et al. (2022) developed a ML algorithm to predict near-term clinical deterioration in emergency patients with real-time EHR data [ 25 ]. This tool was rapidly integrated into clinical practice to support care decisions within the Johns Hopkins Health System, contributing to more consistent and reliable disposition decisions and improved bed allocation during the pandemic [ 26 ]. To inform resource allocation and enhance precision medicine in cardiovascular diseases, Ye et al. (2018) developed and validated a ML model on EHRs of more than 1.5 million individuals to predict the risk of incident essential hypertension within the next year [ 27 ]. Demonstrating excellent performance, this model has been deployed in Maine, United States, to assist healthcare providers in identifying high-risk populations and promoting individualized treatment [ 27 ].

Finally, ML and AI can also contribute to causal inference by identifying potential causal pathways and controlling for confounding factors in observational studies. For example, Kang et al. (2021) utilized a deep learning-based causal inference framework to estimate the causal effect of air pollution on COVID-19 severity while adjusting for confounding factors such as socioeconomic status and weather conditions [ 28 ]. Similarly, Chu et al. (2020) proposed an adversarial deep treatment effect prediction (ADTEP) model to predict treatment effects using heterogeneous EHR data [ 29 ].

In summary, the application of ML and AI in epidemiology offers several key benefits, including:

Handling large, complex datasets: ML and AI can process vast amounts of high-dimensional data, making them valuable tools for extracting meaningful insights from diverse data sources.

Identifying complex patterns: These techniques can uncover intricate, non-linear relationships between exposures and health outcomes, which may not be easily identified using traditional statistical methods.

Integrating multiple and multimodal data types: ML and AI can incorporate data from various sources, such as EHRs, genomic data, and environmental exposures, as well as structured and unstructured text, audio, image, and video data, to provide a more comprehensive understanding of the factors influencing health outcomes.

Improving predictive accuracy: These approaches often achieve higher predictive accuracy than traditional methods, particularly when dealing with complex datasets, enabling the development of more precise risk prediction models.

Enhancing causal inference: While primarily used for prediction, ML and AI can also contribute to causal inference by identifying potential causal pathways and controlling for confounding factors in observational studies.

By leveraging these benefits, the application of ML and AI in epidemiology has the potential to advance our understanding of disease risk factors, improve early detection and prognosis, and, thereby, inform targeted interventions to promote population health.

Opportunities for ML and AI in life-course epidemiology

In life-course epidemiology, which considers the long-term effects of biological, behavioral, and social exposures during gestation, childhood, adolescence, and adulthood, ML and AI offer numerous opportunities by enabling researchers to identify sensitive periods, model complex interactions, predict disease risk trajectories, and enhance causal inference methods.

Identifying sensitive periods and critical windows for intervention

ML and AI can help identify sensitive periods and critical windows for intervention by analyzing longitudinal data on exposures and health outcomes across growth and development. Unsupervised learning techniques, such as clustering and latent class analysis, can uncover distinct subgroups of individuals with similar developmental trajectories, which may inform the timing of interventions [ 30 , 31 ]. Additionally, ML and AI allow for the integration of multiple data types, including EHRs, genomic data, and environmental exposures, and of multimodal data, such as the varied kinds of information in an individual’s records, thus providing a comprehensive perspective on the determinants of health outcomes across the different stages of the life course [ 32 ]. Furthermore, potential causal pathways and mechanisms underlying the associations between exposures during critical windows of early life and later health outcomes can be better established by applying causal discovery algorithms or Mendelian randomization techniques [ 33 ].
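
As a hedged illustration of this idea (a deliberately simplified stand-in for the latent class and growth mixture models used in refs [30, 31]), the sketch below clusters simulated repeated measures and inspects where the subgroup trajectories diverge:

```python
# Illustrative only: k-means clustering of simulated exposure trajectories
# measured at 6 waves; cluster means hint at when subgroups diverge,
# i.e. a candidate sensitive period for intervention.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
waves = np.arange(6)
flat = rng.normal(0.0, 0.3, size=(100, 6))                  # stable trajectories
rising = 0.5 * waves + rng.normal(0.0, 0.3, size=(100, 6))  # worsening trajectories
trajectories = np.vstack([flat, rising])                    # 200 people x 6 waves

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(trajectories)
for k in range(2):
    print(k, np.round(trajectories[labels == k].mean(axis=0), 2))
```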

Modeling complex interactions between biological, social, and environmental factors

ML and AI techniques, such as deep learning and agent-based modeling, can capture the complex, non-linear associations between multiple risk factors and health outcomes across the life course. These approaches can help researchers understand how individual-level exposures and experiences at different life stages interact to shape population-level patterns of health and disease. For example, deep learning algorithms can model the hierarchical and temporal relationships between genetic susceptibility, early life adversity, and adult lifestyle factors to predict the risk of developing chronic diseases. Agent-based models can simulate the spread of infectious diseases through a population, taking into account individual susceptibility, contact patterns, and environmental conditions [ 34 ].
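
The following toy agent-based model (an illustrative sketch, not the open-data model of ref [34]) shows the basic mechanics: agents contact a few random others per step, infection spreads with a fixed per-contact probability, and infected agents recover after a set period.

```python
# Toy susceptible-infected-recovered (SIR) agent-based model; all parameters
# are arbitrary illustrations, not calibrated to any real outbreak.
import random

random.seed(0)
N, P_INFECT, RECOVERY_STEPS, CONTACTS = 500, 0.05, 10, 5
state = ["S"] * N                       # Susceptible / Infected / Recovered
timer = [0] * N
state[0] = "I"                          # seed one infection

for step in range(60):
    infected_now = [i for i in range(N) if state[i] == "I"]  # snapshot per step
    for i in infected_now:
        for j in random.sample(range(N), CONTACTS):          # random mixing
            if state[j] == "S" and random.random() < P_INFECT:
                state[j] = "I"
        timer[i] += 1
        if timer[i] >= RECOVERY_STEPS:
            state[i] = "R"
    if step % 10 == 0:
        print(step, state.count("S"), state.count("I"), state.count("R"))
```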

Predicting individual and population-level disease risk trajectories

ML and AI can be used to develop personalized risk prediction models that estimate an individual’s likelihood of developing a particular disease based on their unique combination of risk factors and exposures across the lifespan. By effectively combining data from various sources such as genomic data, EHRs, and lifestyle factors, these models can provide accurate predictions of disease risk at different stages of an individual’s lifespan. At the population level, ML and AI can identify high-risk subgroups and distinct disease trajectories associated with specific combinations of early life exposures, socioeconomic factors, and health behaviors [ 4 , 9 , 16 , 30 , 31 ]. This information can guide the development of targeted interventions and policies to prevent and manage chronic diseases.

Enhancing causal inference methods in observational studies

ML and AI techniques can strengthen causal inference methods in life-course epidemiology by helping researchers adjust for confounding factors and estimate causal effects in observational studies. Propensity score methods, which estimate the probability of an individual receiving a particular treatment or exposure based on their observed characteristics, can be enhanced using ML algorithms to more accurately balance the distribution of potential confounders between exposed and unexposed groups [ 35 ]. Instrumental variable methods, which use factors associated with the exposure but not the outcome to estimate causal effects, can be improved by using ML to identify and validate potential instrumental variables [ 36 ]. Additionally, advanced ML techniques, such as causal forests, can directly estimate heterogeneous treatment effects and minimize bias in observational studies [ 37 , 38 , 39 ].
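
A minimal sketch of the propensity score idea is given below, with a logistic model standing in for any probabilistic classifier and simulated data in which the true treatment effect is 1.0; it is illustrative only, not the workflow of ref [35].

```python
# Inverse-probability weighting (IPW) with an estimated propensity score;
# the confounder raises both treatment probability and the outcome, so the
# naive group difference is biased upward while the IPW estimate is not.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
confounder = rng.normal(size=n)                         # e.g. childhood SES
treated = rng.binomial(1, 1 / (1 + np.exp(-confounder)))
outcome = 1.0 * treated + 2.0 * confounder + rng.normal(size=n)

ps = (LogisticRegression()
      .fit(confounder.reshape(-1, 1), treated)
      .predict_proba(confounder.reshape(-1, 1))[:, 1])
w = treated / ps + (1 - treated) / (1 - ps)             # IPW weights
ate = (np.average(outcome[treated == 1], weights=w[treated == 1])
       - np.average(outcome[treated == 0], weights=w[treated == 0]))
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()
print(f"naive: {naive:.2f}  IPW: {ate:.2f}  (true effect: 1.00)")
```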

By harnessing the power of ML and AI in these key areas, life-course epidemiology can gain novel insights into the complex determinants of health and disease across the lifespan, ultimately informing the development of more effective, personalized interventions and public health strategies.

Framework for harnessing ML and AI technologies to advance public health solutions

The five principles outlined by Elder and Shanahan offer a robust conceptual framework for comprehending the intricate and ever-changing aspects of health and disease throughout an individual’s life course [ 2 ]. These principles also serve as a foundation for harnessing the potential of ML and AI to identify previously unknown risk factors, predict disease progression, and guide the development of targeted interventions. By leveraging the five principles of life-course research, we discuss the applications of ML and AI in life-course epidemiology based on the framework proposed by Elder and Shanahan [ 2 ].

Lifespan development: This principle emphasizes that human development and aging are lifelong processes, highlighting the importance of examining the cumulative effects of exposures and experiences across the entire life course. ML and AI techniques, such as deep learning and longitudinal modeling, can analyze large, complex datasets spanning multiple life stages to identify patterns and trajectories of health and disease over time [ 30 , 31 ].

Agency: This principle recognizes that individuals have the capacity to make choices and take actions that shape their health and well-being within the constraints of their social and environmental contexts. ML and AI techniques, such as decision trees and reinforcement learning, can model the complex interactions between individual agency and social and environmental factors to identify key intervention points for promoting health and preventing disease [ 40 ].
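
As a purely illustrative sketch of the reinforcement-learning idea mentioned here, the bandit-style learner below discovers which of two hypothetical health behaviors yields the better long-run payoff; the reward values are invented for the example.

```python
# Stateless Q-learning (a two-armed bandit): incremental value updates let the
# agent learn which action is preferable under noisy feedback.
import numpy as np

rng = np.random.default_rng(0)
q = np.zeros(2)                        # value estimates for two behaviors
true_reward = np.array([0.2, 0.8])     # hypothetical long-run payoffs
alpha, eps = 0.1, 0.1                  # learning rate, exploration rate

for _ in range(1000):
    a = rng.integers(2) if rng.random() < eps else int(q.argmax())
    r = rng.normal(true_reward[a], 0.1)
    q[a] += alpha * (r - q[a])         # incremental update toward observed reward

print(np.round(q, 2))                  # estimates converge near the true payoffs
```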

Time and place: This principle emphasizes that every individual life course is embedded within and influenced by its specific historical and geographic context. ML and AI techniques, such as spatial modeling and time series analysis, can analyze the effects of place and time on health outcomes and identify key contextual factors that may influence the effectiveness of public health interventions [ 39 , 41 ]. Recurrent neural networks (RNNs) and long short-term memory (LSTM) networks have been extensively employed for temporal data analysis, enabling the capture of dependencies and patterns over time [ 42 , 43 , 44 ]. Particularly, in recent years, novel architectures like Transformers have emerged and gained prominence due to their capacity to handle long-range dependencies and facilitate parallel processing. Transformers, exemplified by the Bidirectional Encoder Representations from Transformers (BERT) model, have exhibited superior performance across a wide range of natural language processing tasks and have been successfully adapted for time series analysis [ 45 , 46 ].
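
A hedged PyTorch sketch of the recurrent approach is shown below: an LSTM summarizes repeated measures (person × wave × feature) into a single logit for a later binary outcome. The shapes and dimensions are arbitrary illustrations.

```python
# Minimal LSTM over longitudinal data; illustrative architecture only.
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):               # x: (batch, waves, features)
        _, (h_n, _) = self.lstm(x)      # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1])       # one outcome logit per person

model = TrajectoryLSTM()
x = torch.randn(8, 10, 4)               # 8 people, 10 waves, 4 repeated measures
print(model(x).shape)                    # torch.Size([8, 1])
```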

Timing: This principle recognizes that the same events and experiences can have different effects on health depending on when they occur in the life course. ML and AI techniques, such as survival analysis (e.g., survival forests) and Bayesian networks, can model the time-varying effects of exposures on health outcomes and identify optimal timing and targeting of interventions [ 37 , 38 ].
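
As a hedged sketch (using the scikit-survival package; the causal survival forests of ref [38] build on related machinery), a random survival forest can be fitted to right-censored synthetic data as follows:

```python
# Illustrative random survival forest on synthetic right-censored data;
# requires scikit-survival (sksurv).
import numpy as np
from sksurv.ensemble import RandomSurvivalForest

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 3))                         # e.g. three life-course exposures
t_event = rng.exponential(scale=np.exp(-X[:, 0]))   # exposure 0 shortens survival
t_censor = rng.exponential(scale=1.0, size=n)
time = np.minimum(t_event, t_censor)                # observed follow-up time
event = t_event <= t_censor                         # True if the event was observed

# scikit-survival expects a structured array: (event indicator, observed time)
y = np.array(list(zip(event, time)), dtype=[("event", bool), ("time", float)])

rsf = RandomSurvivalForest(n_estimators=100, random_state=0).fit(X, y)
print(rsf.predict(X[:3]))                           # higher score = higher risk
```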

Linked lives: This principle emphasizes that individuals are embedded within social networks and relationships that shape their exposures, behaviors, and outcomes. ML and AI techniques, such as social network analysis and agent-based modeling, can model the complex interactions between individuals and their social environments to identify key leverage points for interventions that promote health and well-being across communities [ 40 , 47 ]. Moreover, large language models (LLMs) can be integrated into this principle to analyze social media data and patient-generated content, providing insights into the social and environmental factors influencing health outcomes across communities and networks [ 23 , 48 , 49 , 50 , 51 , 52 ].
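
The sketch below (illustrative only, using the networkx library with a random small-world graph standing in for real social ties) shows how a simple centrality measure can flag candidate leverage points:

```python
# Betweenness centrality on a synthetic small-world network: nodes that
# bridge otherwise separate clusters are candidate intervention targets.
import networkx as nx

g = nx.watts_strogatz_graph(n=100, k=6, p=0.1, seed=0)
centrality = nx.betweenness_centrality(g)

bridges = sorted(centrality, key=centrality.get, reverse=True)[:5]
print(bridges)                     # five most "bridging" individuals
```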

ML and AI techniques can also integrate multiple principles of the life-course approach simultaneously, enabling researchers to develop more comprehensive models of health trajectories. For instance, causal inference methods enhanced by ML, such as causal forests, can simultaneously address multiple principles by estimating heterogeneous treatment effects across different life stages, social contexts, and individual characteristics [ 37 , 38 , 39 ].
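
A hedged sketch of heterogeneous-effect estimation with a causal forest is given below, using the econml package on simulated data in which the exposure effect grows with one modifier; it illustrates the interface rather than any cited analysis.

```python
# Causal forest (via econml's CausalForestDML) recovering person-level
# treatment effects that vary with an effect modifier.
import numpy as np
from econml.dml import CausalForestDML

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))                   # effect modifiers (e.g. life stage)
T = rng.binomial(1, 0.5, size=n)              # randomized exposure, for simplicity
Y = (0.5 + X[:, 0]) * T + rng.normal(size=n)  # effect grows with X[:, 0]

cf = CausalForestDML(discrete_treatment=True, random_state=0)
cf.fit(Y, T, X=X)
print(cf.effect(X[:5]))                        # estimated person-level effects
```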

Challenges and ethical considerations

The integration of ML and AI in life-course epidemiology presents several challenges and ethical considerations that must be addressed to ensure the responsible and effective use of these technologies.

Data quality, harmonization, and integration

One major challenge is ensuring the quality, harmonization, and integration of data across multiple cohorts and sources [ 53 ]. Some data sources used for training ML models, such as EHRs collected for administrative purposes, might not be gathered with the frequency, granularity, or bandwidth needed to meet the information needs of science and learning, and may therefore present challenges in generating accurate and reliable algorithms [ 54 ]. Models trained on small sample sizes or data of suboptimal quality involving missing values, inaccuracies, and inconsistencies can lead to unreliable predictions and biased outcomes. Unlike certain health specialties such as dermatology or ophthalmology, where ML and AI have been successfully adopted due to their reliance on visual evaluation and pattern recognition [ 11 ], the application of ML and AI in epidemiology presents unique challenges. Life-course studies often involve data collected using different methods, at different time points, and from diverse populations. Ensuring the comparability and interoperability of these data is crucial for developing robust and generalizable ML and AI models. This requires close collaboration between researchers, data managers, and IT professionals to establish common data standards and protocols. Furthermore, life-course studies often involve factors with complex interactions and dynamic behavior, as well as social phenomena that are inherently difficult to quantify and model. When dealing with dynamic variables, it is essential to retrain and reevaluate models to account for new trends and changes, requiring ongoing monitoring and updates to maintain accuracy and relevance.

Interpretability and explainability

Another significant challenge is the interpretability and explainability of ML and AI models. As these algorithms become increasingly complex, it can be difficult to understand how they arrive at their predictions or decisions. This “black box” nature of some ML models raises concerns about their transparency and accountability, particularly in the context of public health interventions that can have far-reaching consequences [ 55 , 56 ]. Researchers must strive to develop models that are not only accurate but also interpretable, allowing for clear communication of their underlying logic and limitations to policymakers, healthcare providers, and the public. A case study indicated that when dealing with complex social factors, integrating predictive ML models with explanatory models can enhance understanding and prediction of outcomes [ 57 ].
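
One widely used post-hoc approach pairs a “black box” model with per-feature attributions; the sketch below uses the SHAP library on a synthetic classifier and is illustrative only.

```python
# Post-hoc explanation with SHAP values: each prediction is decomposed into
# additive per-feature contributions.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])    # contributions for 10 people
print(np.shape(shap_values))
```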

Bias and generalizability

Bias and generalizability are critical issues in the application of ML and AI to life-course epidemiology. If the training data used to develop these models are not representative of the broader population or if they contain historical biases, the resulting algorithms may perpetuate or even amplify these biases [ 58 ]. Results generated from these algorithms can lead to unintended consequences, such as the exacerbation of health disparities or the misallocation of resources. For example, EHR data are often a complex product of biological, socio-economic conditions as well as prior practices of providers and health systems [ 54 ]. Researchers must be vigilant in identifying and mitigating potential sources of bias in different data sources to ensure that their models are equitable and generalizable to diverse populations.
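
A minimal bias-audit sketch is shown below: before deployment, model discrimination is compared across a (hypothetical) subgroup attribute. Real audits would use many more metrics and carefully defined groups.

```python
# Subgroup performance check on synthetic scores: large AUC gaps between
# groups flag potential inequity that needs investigation before deployment.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.3, size=1000)
y_score = np.clip(0.6 * y_true + rng.normal(0.2, 0.3, size=1000), 0, 1)
group = rng.binomial(1, 0.4, size=1000)        # e.g. a binary demographic label

for g in (0, 1):
    mask = group == g
    print(g, round(roc_auc_score(y_true[mask], y_score[mask]), 3))
```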

Integration with domain knowledge

While ML and AI can identify novel patterns and associations in data, they do not necessarily provide insights into the underlying biological or clinical mechanisms. Integrating ML and AI findings with existing domain knowledge and expert interpretation is essential for ensuring the validity and relevance of the results [ 59 , 60 ]. This requires close collaboration between data scientists, epidemiologists, and clinical experts, as well as a willingness to iterate between data-driven and hypothesis-driven approaches.

Privacy and ethical concerns

The use of sensitive data in life-course studies poses significant privacy concerns and ethical challenges [ 61 , 62 ]. These studies often involve the collection and analysis of highly personal information, such as genetic data, medical records, and social media activity. Ensuring the security and confidentiality of these data is paramount, requiring robust data governance frameworks and strict adherence to ethical guidelines. Researchers must also grapple with the potential unintended consequences of their work, such as the stigmatization of certain groups or the misuse of predictive models for discriminatory purposes. Strategies such as adopting synthetic data for training models provide new opportunities to improve the diversity and robustness of ML and AI models by reducing patient privacy concerns and facilitating data sharing while maintaining the original distribution of data [ 63 ].

Computational resources and expertise

Applying ML and AI techniques to large-scale epidemiological data can be computationally intensive and require specialized expertise in data science and programming. Access to high-performance computing resources and qualified personnel may be a barrier for some research groups, particularly in low- and middle-income settings. Building capacity and infrastructure for ML and AI in epidemiological research is an important challenge that requires ongoing investment and support.

Potential overreliance on ML and AI

While ML and AI offer significant opportunities for advancing research, clinical practice, and policymaking, it is crucial to recognize that not all ML and AI models outperform traditional models in healthcare and public health [ 64 , 65 ]. A salient illustration is the “Fragile Families Challenge,” in which diverse state-of-the-art ML models were employed to predict individual life outcomes, including psychological and socio-economic status [ 66 ]. The resulting performance exhibited only marginal improvement over simple benchmark models [ 66 ]. This limited predictive value highlights the potential limitations of ML models when applied to complex social phenomena, in contrast with their success in physical and biological contexts [ 66 , 67 ]. Before adopting ML and AI technologies, particularly in applied domains, these techniques should be critically evaluated against traditional models, especially where reliable or superior alternatives already exist [ 64 ]. While a consensus has yet to emerge, such evaluation seems essential to avoid unnecessary investment in sophisticated models and to limit excessive consumption of finite computational resources.

Despite the demonstrated methodological efficacy of ML and AI models, it is imperative to acknowledge that their implementation may not necessarily yield net positive outcomes for patients. A systematic review of 65 randomized controlled trials evaluating AI-based prediction tools revealed that approximately 40% of the tools, which had previously exhibited satisfactory performance in observational model development or validation studies, failed to demonstrate statistically significant clinical benefits for patients when compared to standard clinical care protocols [ 68 ]. The dynamic nature of clinical practice, fluctuations in the prevalence of comorbidities, and various socio-environmental factors can precipitate shifts in the distribution of patient characteristics. Consequently, these changes necessitate the periodic retraining and re-evaluation of AI systems to maintain their relevance and efficacy [ 69 , 70 ].

Addressing these challenges and ethical considerations will require ongoing collaboration and dialogue among researchers, policymakers, and community stakeholders. It will be essential to develop guidelines and best practices for the responsible use of ML and AI in life course epidemiology, ensuring that these technologies are applied in a manner that is transparent, accountable, and aligned with public values and priorities. By proactively addressing these issues, we can harness the power of ML and AI to advance our understanding of health and disease across the life course while safeguarding the rights and wellbeing of individuals and communities.

Future directions and recommendations

To fully realize the potential of ML and AI in life-course epidemiology and advance public health solutions, we discuss the following recommendations for future research and practice:

Foster interdisciplinary collaborations

Collaboration between epidemiologists, data scientists, and public health professionals is crucial for the successful integration of ML and AI in life-course research. These collaborations will enable the exchange of knowledge, skills, and expertise necessary to develop and apply cutting-edge ML and AI techniques to complex life-course data. Epidemiologists contribute a deep understanding of the biological, social, and environmental factors influencing health trajectories, while data scientists bring advanced computational and analytical skills. Public health professionals provide invaluable insights into the practical implications and translational potential of research findings. By working together, these multidisciplinary teams can drive innovation, uncover novel insights, and ultimately improve population health outcomes.

Develop standardized guidelines and best practices

Establishing standardized guidelines and best practices for using ML and AI in life-course research is essential to ensure the reliability, reproducibility, and ethical application of these technologies. Clear protocols should be developed for data collection, preprocessing, and analysis, as well as guidelines for model development, validation, and reporting. These standards should be created through a collaborative process involving researchers, professional societies, and other stakeholders, drawing on existing best practices in epidemiology, data science, and bioethics. By promoting transparency, consistency, and rigor in the use of ML and AI, the credibility and impact of life-course research can be enhanced. At the same time, it is important to be aware of the limits of ML and AI applications: guidelines should also address when adopting ML and AI is necessary and appropriate in different areas.

Advocate for the integration of ML and AI in public health decision-making

Translating research findings into actionable policies and interventions is key to realizing the full potential of ML and AI in life-course epidemiology. This requires close collaboration between researchers, policymakers, and community stakeholders to ensure that ML and AI models are developed and applied in a manner that addresses real-world health challenges and promotes health equity [ 71 ]. To generate results that translate into advances in population health and health systems, learning initiatives must align with questions that matter for clinical practice or health-related decision-making [ 54 ]. Researchers should strive to communicate their findings in clear, accessible language and engage with decision-makers and the public to build trust and understanding of these technologies. Policymakers, in turn, should invest in the infrastructure and resources necessary to support the use of ML and AI in public health, including data systems, computational tools, and workforce development. By working together, the power of ML and AI can be harnessed to inform evidence-based policies and interventions that improve health outcomes across the life course.

Prioritize equity and fairness in ML and AI applications

As ML and AI technologies become increasingly integrated into life-course research and public health practice, it is crucial to prioritize equity and fairness in their development and application [ 58 , 72 ]. Researchers should assess their data comprehensively for the specific purposes at hand, while actively working to identify and mitigate potential sources of bias in their data and models, ensuring that the benefits of these technologies are distributed equitably across diverse populations. This may involve developing new methods for bias detection and correction, as well as engaging with communities and stakeholders to understand their needs and concerns. Policymakers and funding agencies should also prioritize research and initiatives that focus on addressing health disparities and promoting health equity through the use of ML and AI.

Invest in training and capacity building

To fully capitalize on the potential of ML and AI in life-course epidemiology, it is essential to invest in training and capacity building for researchers, public health professionals, and policymakers. This may involve developing new educational programs and curricula that integrate data science and computational skills with domain expertise in epidemiology and public health. It may also require establishing new funding mechanisms and support structures to enable researchers to access the computational resources and expertise necessary to apply ML and AI techniques to their data. Building a skilled and diverse workforce that can effectively leverage these technologies will be critical for driving innovation and progress in life course research and public health practice.

By pursuing these recommendations and prioritizing interdisciplinary collaboration, standardization, integration, equity, and capacity building, the field of life course epidemiology can harness the full potential of ML and AI to advance our understanding of health and disease across the lifespan and develop more effective, equitable, and evidence-based public health solutions.

Conclusions

The integration of ML and AI in life-course epidemiology presents a remarkable opportunity to advance our understanding of the complex interplay between biological, social, and environmental factors that shape health trajectories across the lifespan. Leveraging these powerful technologies to analyze diverse datasets can yield novel insights, improve disease risk prediction, and inform the development of targeted interventions.

However, the realization of this potential is contingent upon addressing the significant challenges associated with the use of ML and AI, including issues related to data quality, model interpretability, bias, privacy, and equity. To fully harness the power of these technologies, it is imperative to foster interdisciplinary collaborations, establish standardized guidelines and best practices, advocate for the integration of ML and AI into public health decision-making processes, prioritize fairness in their application, and invest in training and capacity building.

It is important to acknowledge that this perspective paper, while striving for a balanced and comprehensive discussion, is not a systematic review. As such, the information presented may be subject to selection bias. Our narrative approach, while broad, may not capture all relevant studies or viewpoints. The examples cited were selected based on relevance and impact, but may not represent an exhaustive body of evidence. Readers should consider this limitation when interpreting the conclusions and recommendations presented.

As we look to the future, we should be guided by a vision of harnessing data, technology, and innovation to promote health, prevent disease, and reduce inequities across the life course. By working collaboratively to responsibly integrate ML and AI into life-course epidemiology, we can take a significant step towards creating a healthier and more equitable future for all.

Availability of data and materials

No datasets were generated or analysed during the current study.

Abbreviations

AI: Artificial intelligence

EHR: Electronic health record

LLM: Large language model

ML: Machine learning

NLP: Natural language processing

Wagner C, Carmeli C, Jackisch J, Kivimaki M, van der Linden BWA, Cullati S, Chiolero A. Life course epidemiology and public health. Lancet Public Health. 2024;9(4):e261–9.

Elder Jr GH, Shanahan MJ. The Life Course and Human Development. In: Damon W, Lerner RM, editors. Handbook of Child Psychology. Volume 1, edn. New Jersey: Wiley; 2007.

Bi Q, Goodman KE, Kaminsky J, Lessler J. What is machine learning? A primer for the epidemiologist. Am J Epidemiol. 2019;188(12):2222–39.

Sharma D, Gotlieb N, Farkouh ME, Patel K, Xu W, Bhat M. Machine learning approach to classify cardiovascular disease in patients with nonalcoholic fatty liver disease in the UK Biobank Cohort. J Am Heart Assoc. 2022;11(1):e022576.

Weng SF, Reps J, Kai J, Garibaldi JM, Qureshi N. Can machine-learning improve cardiovascular risk prediction using routine clinical data? PLoS ONE. 2017;12(4):e0174944.

Rim TH, Lee CJ, Tham YC, Cheung N, Yu M, Lee G, Kim Y, Ting DSW, Chong CCY, Choi YS, et al. Deep-learning-based cardiovascular risk stratification using coronary artery calcium scores predicted from retinal photographs. Lancet Digit Health. 2021;3(5):e306–16.

Ward A, Sarraju A, Chung S, Li J, Harrington R, Heidenreich P, Palaniappan L, Scheinker D, Rodriguez F. Machine learning and atherosclerotic cardiovascular disease risk prediction in a multi-ethnic population. NPJ Digit Med. 2020;3:125.

Tran KA, Kondrashova O, Bradley A, Williams ED, Pearson JV, Waddell N. Deep learning in cancer diagnosis, prognosis and treatment selection. Genome Med. 2021;13(1):152.

Waljee AK, Weinheimer-Haus EM, Abubakar A, Ngugi AK, Siwo GH, Kwakye G, Singal AG, Rao A, Saini SD, Read AJ, et al. Artificial intelligence and machine learning for early detection and diagnosis of colorectal cancer in sub-Saharan Africa. Gut. 2022;71(7):1259–65.

Zhang B, Shi H, Wang H. Machine learning and AI in cancer prognosis, prediction, and treatment selection: a critical approach. J Multidiscip Healthc. 2023;16:1779–91.

Lee EY, Maloney NJ, Cheng K, Bach DQ. Machine learning for precision dermatology: advances, opportunities, and outlook. J Am Acad Dermatol. 2021;84(5):1458–9.

Swanson K, Wu E, Zhang A, Alizadeh AA, Zou J. From patterns to patients: advances in clinical machine learning for cancer diagnosis, prognosis, and treatment. Cell. 2023;186(8):1772–91.

Lu L, Dercle L, Zhao B, Schwartz LH. Deep learning for the prediction of early on-treatment response in metastatic colorectal cancer from serial medical imaging. Nat Commun. 2021;12(1):6654.

Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, Thrun S. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115–8.

Grueso S, Viejo-Sobera R. Machine learning methods for predicting progression from mild cognitive impairment to Alzheimer’s disease dementia: a systematic review. Alzheimers Res Ther. 2021;13(1):162.

Tang AS, Rankin KP, Cerono G, Miramontes S, Mills H, Roger J, Zeng B, Nelson C, Soman K, Woldemariam S, et al. Leveraging electronic health records and knowledge networks for Alzheimer’s disease prediction and sex-specific biological insights. Nat Aging. 2024;4(3):379–95.

Gao XR, Chiariglione M, Qin K, Nuytemans K, Scharre DW, Li YJ, Martin ER. Explainable machine learning aggregates polygenic risk scores and electronic health records for Alzheimer’s disease prediction. Sci Rep. 2023;13(1):450.

Bhagwat N, Viviano JD, Voineskos AN, Chakravarty MM, Alzheimer’s Disease Neuroimaging I. Modeling and prediction of clinical symptom trajectories in Alzheimer’s disease using longitudinal data. PLoS Comput Biol. 2018;14(9):e1006376.

Dogan O, Tiwari S, Jabbar MA, Guggari S. A systematic review on AI/ML approaches against COVID-19 outbreak. Complex Intell Syst. 2021;7(5):2655–78.

Bengtsson L, Gaudart J, Lu X, Moore S, Wetter E, Sallah K, Rebaudet S, Piarroux R. Using mobile phone data to predict the spatial spread of cholera. Sci Rep. 2015;5:8923.

Subramaniam S, Raju N, Ganesan A, Rajavel N, Chenniappan M, Prakash C, Pramanik A, Basak AK, Dixit S. Artificial intelligence technologies for forecasting air pollution and human health: a narrative review. Sustainability. 2022;14:9951.

Odlum M, Yoon S. What can we learn about the Ebola outbreak from tweets? Am J Infect Control. 2015;43(6):563–71.

Guevara M, Chen S, Thomas S, Chaunzwa TL, Franco I, Kann BH, Moningi S, Qian JM, Goldstein M, Harper S, et al. Large language models to identify social determinants of health in electronic health records. NPJ Digit Med. 2024;7(1):6.

Patra BG, Sharma MM, Vekaria V, Adekkanattu P, Patterson OV, Glicksberg B, Lepow LA, Ryu E, Biernacka JM, Furmanchuk A, et al. Extracting social determinants of health from electronic health records using natural language processing: a systematic review. J Am Med Inform Assoc. 2021;28(12):2716–27.

Hinson JS, Klein E, Smith A, Toerper M, Dungarani T, Hager D, Hill P, Kelen G, Niforatos JD, Stephens RS, et al. Multisite implementation of a workflow-integrated machine learning system to optimize COVID-19 hospital admission decisions. NPJ Digital Medicine. 2022;5(1):94.

Hamilton AJ, Strauss AT, Martinez DA, Hinson JS, Levin S, Lin G, Klein EY. Machine learning and artificial intelligence: applications in healthcare epidemiology. Antimicrob Steward Healthc Epidemiol. 2021;1(1):e28.

Ye C, Fu T, Hao S, Zhang Y, Wang O, Jin B, Xia M, Liu M, Zhou X, Wu Q, et al. Prediction of incident hypertension within the next year: prospective study using statewide electronic health records and machine learning. J Med Internet Res. 2018;20(1):e22.

Kang Q, Song X, Xin X, Chen B, Chen Y, Ye X, Zhang B. Machine learning-aided causal inference framework for environmental data analysis: a COVID-19 case study. Environ Sci Technol. 2021;55(19):13400–10.

Chu J, Dong W, Wang J, He K, Huang Z. Treatment effect prediction with adversarial deep learning using electronic health records. BMC Med Inform Decis Mak. 2020;20(Suppl 4):139.

Zhu Y, Li C, Xie W, Zhong B, Wu Y, Blumenthal JA. Trajectories of depressive symptoms and subsequent cognitive decline in older adults: a pooled analysis of two longitudinal cohorts. Age Ageing. 2022;51(1):afab191.

Wassink-Vossen S, Collard RM, Wardenaar KJ, Verhaak PFM, Rhebergen D, Naarding P, Voshaar RCO. Trajectories and determinants of functional limitations in late-life depression: a 2-year prospective cohort study. Eur Psychiatry. 2019;62:90–6.

Chen RJ, Lu MY, Williamson DFK, Chen TY, Lipkova J, Noor Z, Shaban M, Shady M, Williams M, Joo B, Mahmood F. Pan-cancer integrative histology-genomic analysis via multimodal deep learning. Cancer Cell. 2022;40(8):865–878 e866.

Davies NM, Holmes MV, Davey Smith G. Reading Mendelian randomisation studies: a guide, glossary, and checklist for clinicians. BMJ. 2018;362:k601.

Hunter E, Mac Namee B, Kelleher J. An open-data-driven agent-based model to simulate infectious disease outbreaks. PLoS ONE. 2018;13(12):e0208775.

Ferri-Garcia R, Rueda MDM. Propensity score adjustment using machine learning classification algorithms to control selection bias in online surveys. PLoS ONE. 2020;15(4):e0231500.

Gandhi A, Hosanagar K, Singh A. Machine learning instrument variables for causal inference. In: EC’20: Proceedings of the 21st ACM Conference on Economics and Computation: 2019. 2019.

Jawadekar N, Kezios K, Odden MC, Stingone JA, Calonico S, Rudolph K, Zeki Al Hazzouri A. Practical guide to honest causal forests for identifying heterogeneous treatment effects. Am J Epidemiol. 2023;192(7):1155–65.

Cui Y, Kosorok MR, Sverdrup E, Wager S, Zhu R. Estimating heterogeneous treatment effects with right-censored data via causal survival forests. J R Stat Soc Ser B Stat Methodol. 2023;85(2):179–211.

Credit K, Lehnert M. A structured comparison of causal machine learning methods to assess heterogeneous treatment effects in spatial data. In: Journal of Geographical Systems. 2023.

Lundberg SM, Erion G, Chen H, DeGrave A, Prutkin JM, Nair B, Katz R, Himmelfarb J, Bansal N, Lee SI. From local explanations to global understanding with explainable AI for trees. Nat Mach Intell. 2020;2(1):56–67.

Huang C, Petukhina A. Modern Machine Learning Methods for Time Series Analysis. In: Applied Time Series Analysis and Forecasting with Python. Switzerland: Springer; 2022. p. 341–61.

Mao S, Sejdic E. A review of recurrent neural network-based methods in computational physiology. IEEE Trans Neural Netw Learn Syst. 2023;34(10):6983–7003.

Wu Z, Tian Y, Li M, Wang B, Quan Y, Liu J. Prediction of air pollutant concentrations based on the long short-term memory neural network. J Hazard Mater. 2024;465:133099.

Liu X, Zhang X, Wang R, Liu Y, Hadiatullah H, Xu Y, Wang T, Bendl J, Adam T, Schnelle-Kreis J, Querol X. High-precision microscale particulate matter prediction in diverse environments using a long short-term memory neural network and street view imagery. Environ Sci Technol. 2024;58(8):3869–82.

Homburg M, Meijer E, Berends M, Kupers T, Olde Hartman T, Muris J, de Schepper E, Velek P, Kuiper J, Berger M, Peters L. A natural language processing model for COVID-19 detection based on dutch general practice electronic health records by using bidirectional encoder representations from transformers: development and validation study. J Med Internet Res. 2023;25:e49944.

Stojanov R, Popovski G, Cenikj G, Korousic Seljak B, Eftimov T. A Fine-tuned bidirectional encoder representations from transformers model for food named-entity recognition: algorithm development and validation. J Med Internet Res. 2021;23(8):e28229.

Smit LC, Dikken J, Schuurmans MJ, de Wit NJ, Bleijenberg N. Value of social network analysis for developing and evaluating complex healthcare interventions: a scoping review. BMJ Open. 2020;10(11):e039681.

Williams CYK, Zack T, Miao BY, Sushil M, Wang M, Kornblith AE, Butte AJ. Use of a large language model to assess clinical acuity of adults in the emergency department. JAMA Netw Open. 2024;7(5):e248895.

McCrary MR, Galambus J, Chen WS. Evaluating the diagnostic performance of a large language model-powered chatbot for providing immunohistochemistry recommendations in dermatopathology. J Cutan Pathol. 2024;51:689–95.

Kim S, Kim K, Wonjeong Jo C. Accuracy of a large language model in distinguishing anti- and pro-vaccination messages on social media: the case of human papillomavirus vaccination. Prev Med Rep. 2024;42:102723.

Glicksberg BS, Timsina P, Patel D, Sawant A, Vaid A, Raut G, Charney AW, Apakama D, Carr BG, Freeman R, et al. Evaluating the accuracy of a state-of-the-art large language model for prediction of admissions from the emergency room. J Am Med Inform Assoc. 2024;31:1921–8.

Park YJ, Pillai A, Deng J, Guo E, Gupta M, Paget M, Naugler C. Assessing the research landscape and clinical utility of large language models: a scoping review. BMC Med Inform Decis Mak. 2024;24(1):72.

Wiens J, Shenoy ES. Machine learning for healthcare: on the verge of a major shift in healthcare epidemiology. Clin Infect Dis. 2018;66(1):149–53.

London AJ. Artificial intelligence in medicine: overcoming or recapitulating structural challenges to improving patient care? Cell Rep Med. 2022;3(5):100622.

Ahmad OF, Stoyanov D, Lovat LB. Barriers and pitfalls for artificial intelligence in gastroenterology: ethical and regulatory issues. Tech Innov Gastrointest Endosc. 2020;22(2):80–4.

Su C, Xu Z, Pathak J, Wang F. Deep learning in mental health outcome research: a scoping review. Transl Psychiatry. 2020;10(1):116.

Eisbach S, Mai O, Hertel G. Combining theoretical modelling and machine learning approaches: the case of teamwork effects on individual effort expenditure. New Ideas Psychol. 2024;73:101077.

Rajkomar A, Hardt M, Howell MD, Corrado G, Chin MH. Ensuring fairness in machine learning to advance health equity. Ann Intern Med. 2018;169(12):866–72.

Murdoch WJ, Singh C, Kumbier K, Abbasi-Asl R, Yu B. Definitions, methods, and applications in interpretable machine learning. Proc Natl Acad Sci U S A. 2019;116(44):22071–80.

Littmann M, Selig K, Cohen-Lavi L, Frank Y, Hönigschmid P, Kataka E, Mösch A, Qian K, Ron A, Schmid S, et al. Validity of machine learning in biology and medicine increased through collaborations across fields of expertise. Nat Mach Intell. 2020;2(1):18–24.

Char DS, Abramoff MD, Feudtner C. Identifying ethical considerations for machine learning healthcare applications. Am J Bioeth. 2020;20(11):7–17.

Rajpurkar P, Chen E, Banerjee O, Topol EJ. AI in health and medicine. Nat Med. 2022;28(1):31–8.

Chen RJ, Lu MY, Chen TY, Williamson DFK, Mahmood F. Synthetic data in machine learning for medicine and healthcare. Nat Biomed Eng. 2021;5(6):493–7.

Volovici V, Syn NL, Ercole A, Zhao JJ, Liu N. Steps to avoid overuse and misuse of machine learning in clinical research. Nat Med. 2022;28(10):1996–9.

Christodoulou E, Ma J, Collins GS, Steyerberg EW, Verbakel JY, Van Calster B. A systematic review shows no performance benefit of machine learning over logistic regression for clinical prediction models. J Clin Epidemiol. 2019;110:12–22.

Salganik MJ, Lundberg I, Kindel AT, Ahearn CE, Al-Ghoneim K, Almaatouq A, Altschul DM, Brand JE, Carnegie NB, Compton RJ, et al. Measuring the predictability of life outcomes with a scientific mass collaboration. Proc Natl Acad Sci U S A. 2020;117(15):8398–403.

Buchholz O, Grote T. Predicting and explaining with machine learning models: social science as a touchstone. Stud Hist Philos Sci. 2023;102:60–9.

Zhou Q, Chen ZH, Cao YH, Peng S. Clinical impact and quality of randomized controlled trials involving interventions evaluating artificial intelligence prediction tools: a systematic review. NPJ Digit Med. 2021;4(1):154.

Nestor B, McDermott MB, Boag WW, Berner G, Naumann T, Hughes MC, et al. Feature robustness in non-stationary health records: caveats to deployable model performance in common clinical machine learning tasks. In: Machine Learning for Healthcare Conference: 2019. University of Michigan: PMLR; 2019. p. 381–405.

Finlayson SG, Subbaswamy A, Singh K, Bowers J, Kupke A, Zittrain J, Kohane IS, Saria S. The clinician and dataset shift in artificial intelligence. N Engl J Med. 2021;385(3):283–6.

Fletcher RR, Nakeshimana A, Olubeko O. Addressing fairness, bias, and appropriate use of artificial intelligence and machine learning in global health. Front Artif Intell. 2020;3:561802.

Acknowledgements

Open access funding provided by Karolinska Institute. SC’s research was supported by the PENDA, funded by the UK Foreign, Commonwealth and Development Office.

Author information

Shanquan Chen and Jiazhou Yu contributed equally to this work.

Authors and Affiliations

Faculty of Epidemiology and Population Health, London School of Hygiene & Tropical Medicine, Keppel Street, London, WC1E 7HT, UK

Shanquan Chen & Sarah Chamouni

Department of Medicine and Therapeutics, The Chinese University of Hong Kong, Hong Kong SAR, China

Jiazhou Yu

Department of Computer Science, University College London, London, WC1E 6BT, UK

Yuqi Wang

Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, 171 64, Sweden

Yunfei Li

You can also search for this author in PubMed   Google Scholar

Contributions

SC1: conceptualization, writing-original draft. JY: writing-reviewing and editing. SC2: writing-reviewing and editing. YW: writing-reviewing and editing. YL: writing-reviewing and editing. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Shanquan Chen or Yunfei Li .

Ethics declarations

Ethics approval and consent to participate, consent for publication, competing interests.

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article.

Chen, S., Yu, J., Chamouni, S. et al. Integrating machine learning and artificial intelligence in life-course epidemiology: pathways to innovative public health solutions. BMC Med 22 , 354 (2024). https://doi.org/10.1186/s12916-024-03566-x

Download citation

Received : 22 May 2024

Accepted : 19 August 2024

Published : 02 September 2024

DOI : https://doi.org/10.1186/s12916-024-03566-x

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Life course
  • Epidemiology
  • Public health

BMC Medicine

ISSN: 1741-7015

artificial intelligence in mental health care research paper

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings

Preview improvements coming to the PMC website in October 2024. Learn More or Try it out now .

  • Advanced Search
  • Journal List
  • Croat Med J
  • v.61(3); 2020 Jun

Artificial intelligence in prediction of mental health disorders induced by the COVID-19 pandemic among health care workers

Krešimir Ćosić, Siniša Popović, Marko Šarlija, Ivan Kesedžić, Tanja Jovanovic

1 Laboratory for Interactive Simulation Systems, Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia

2 Department of Psychiatry and Behavioral Neurosciences, Wayne State University, Detroit, MI, United States of America

The coronavirus disease 2019 (COVID-19) pandemic and its immediate aftermath present a serious threat to the mental health of health care workers (HCWs), who may develop elevated rates of anxiety, depression, posttraumatic stress disorder, or even suicidal behaviors. Therefore, the aim of this article is to address the problem of prevention of HCWs’ mental health disorders by early prediction of individuals at a higher risk of later chronic mental health disorders due to high distress during the COVID-19 pandemic. The article proposes a methodology for prediction of mental health disorders induced by the pandemic, which includes: Phase 1) objective assessment of the intensity of HCWs’ stressor exposure, based on information retrieved from hospital archives and clinical records; Phase 2) subjective self-report assessment of stress during the COVID-19 pandemic experienced by HCWs and their relevant psychological traits; Phase 3) design and development of appropriate multimodal stimulation paradigms to optimally elicit specific neuro-physiological reactions; Phase 4) objective measurement and computation of relevant neuro-physiological predictor features based on HCWs’ reactions; and Phase 5) statistical and machine learning analysis of highly heterogeneous data sets obtained in previous phases. The proposed methodology aims to expand traditionally used subjective self-report predictors of mental health disorders with more objective metrics, which is aligned with the recent literature related to predictive modeling based on artificial intelligence. This approach is generally applicable to all those exposed to high levels of stress during the COVID-19 pandemic and might assist mental health practitioners to make diagnoses more quickly and accurately.

The coronavirus disease 2019 (COVID-19) pandemic and its immediate aftermath present a serious threat to the mental health of health care workers (HCWs), who may develop elevated rates of anxiety, depression, posttraumatic stress disorder (PTSD), or even suicidal behaviors ( 1 ). Recent research related to the COVID-19 pandemic ( 2 , 3 ) and the 2015 Middle East respiratory syndrome (MERS) outbreak ( 4 ) recognizes that HCWs are at high risk for mental illness. Urgent monitoring of their mental health is therefore needed, particularly early prediction and proper treatment for nurses and physicians who were exposed to high levels of distress by working directly with ill or quarantined persons ( 5 ). Mental health risks of highly distressed individuals are further increased when they exhibit low overall stress resilience and have other vulnerability factors, such as a general propensity to psychological distress ( 6 ) and low self-control ( 7 ). Recognizing and identifying such individuals in the early stages of acute stress is extremely important in order to prevent the development of more serious long-term mental health disorders, such as PTSD, depression, and suicidal behavior. However, mental disorders are difficult to diagnose, and even more difficult to predict, owing to the current lack of biomarkers ( 8 ), the subjectivity of human assessment, and unique, personalized characteristics of illness that may not be observable by mental health practitioners. Currently, the diagnosis of mental health disorders is mainly based on symptoms categorized according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) ( 9 ).

In such circumstances, one of the greatest impacts of digital psychiatry, particularly applied artificial intelligence (AI) and machine learning (ML) ( 10 - 15 ), during the ongoing COVID-19 pandemic is the ability to detect and predict early deterioration of HCWs’ mental health, which can lead to chronic mental health disorders. Furthermore, AI-based psychiatry may help mental health practitioners redefine mental illnesses more objectively than is currently done by DSM-5 ( 14 ). Regardless of the specific application, ie, prediction, prevention, or diagnosis, AI-based technologies in psychiatry rely on the identification of specific patterns within highly heterogeneous multimodal sets of data ( 13 ). These big data sets may include various psychometric or mood rating scales, brain imaging data, genomics, blood biomarkers, data from novel monitoring systems (eg, smartphones), data scraped from social media platforms ( 16 ), speech and language data, facial data, dynamics of the oculometric system, attention assessment based on eye-gaze data, as well as various features based on the analysis of peripheral physiological signals ( 8 , 17 ), eg, respiratory sinus arrhythmia, startle reactivity, etc. AI systems based on such multimodal neuro-psycho-physiological features can detect mental health disorders early enough to prevent or reduce the emergence of severe mental illnesses and improve overall mental health. AI therefore has the transformational power to move psychiatry from a subjective diagnostic system toward a more objective medical discipline, and a new generation of AI in psychiatry might act as a self-explanatory digital assistant to psychiatrists. Psychiatry today could certainly benefit from AI’s ability to analyze data and recognize patterns and hidden warning signs that a psychotherapist might miss. Such timely information enables making diagnoses more quickly and accurately, and might be lifesaving, particularly for those HCWs who might have suicidal ideation ( 18 , 19 ) due to heavy mental distress during the COVID-19 pandemic.

Hence, the aim of this article is to address the problem of prevention of HCWs’ mental health disorders by early prediction of individuals who may have a higher risk of later chronic mental health disorders due to high distress during the COVID-19 pandemic. To reach this aim and enhance traditional subjective diagnostic and risk assessment approaches, the methodology proposed in this article builds on our extensive experimental research on the selection of resilient candidates for special forces during Survival, Evasion, Resistance and Escape (S.E.R.E.) training, conducted in collaboration with Emory University School of Medicine, Atlanta, United States, and Hadassah Hebrew University Hospital, Jerusalem, Israel ( 20 ). A similar methodology has been applied in our project on the selection of resilient candidates for air traffic controllers, in cooperation with Harvard Medical School & Massachusetts General Hospital and Croatia Air Traffic Control ( 17 , 21 ). These multi-year experimental research projects are based on a variety of questionnaires and experimental measurements, which include a set of comprehensive multimodal stimuli, corresponding multimodal neuro-physiological, oculometric, and acoustic/speech responses, and complex feature computation. We therefore believe that future clinical research based on the proposed multimodal neuro-psycho-physiological features and AI analysis can detect mental health disorders early enough to prevent and reduce the emergence of severe mental illnesses. Such reliable predictors of potential mental health disorders among HCWs due to COVID-19 stressors will be crucial for the mental health of HCWs and for maintaining the high efficiency and productivity of medical institutions globally.

Proposed methodology

The proposed methodology, described in Figure 1, comprises five phases: objective assessment of the intensity of HCWs’ stressor exposure during the COVID-19 pandemic (Phase 1); subjective assessment of stress experienced by HCWs, based on a specific psychological questionnaire (Phase 2); design and development of distinctive stimulation paradigms (Phase 3); computation of neuro-physiological features based on responses to the stimulation (Phase 4); and statistical and ML data analysis (Phase 5).

Figure 1. The proposed methodology for prediction of mental health disorders. The illustration was partially assembled from public domain/free sources on Wikipedia and Wikimedia Commons.

Phase 1: Objective stress assessment

Objective assessment of the intensity of HCWs’ stressor exposure during the COVID-19 pandemic is based on acquiring information from official hospital archives and clinical records regarding their daily schedules, overtime work, the level of threat they experienced, sick leave, etc. These objective metrics of exposure to stressors are proposed based on the analysis and adaptation of different questionnaires that have been used for the assessment of stressors in military combat deployment and operations ( 22 - 24 ), as well as stressors in virus outbreaks ( 25 - 28 ). The key aim of this phase is to objectively stratify individual HCWs according to the level of stress to which they were exposed during their clinical service, using information provided by authorized clinical sources rather than relying on individuals’ self-reports.

Phase 2: Subjective stress assessment

Subjective assessment of stress experienced by HCWs during their COVID-19 pandemic clinical service is based on a questionnaire developed by selecting the most appropriate items from general-purpose psychological questionnaires used for early recognition of distress, mental health disorder screening, and stress resilience (eg, 29 - 38 ), as well as from specific COVID-19 psychological questionnaires ( 25 - 28 , 39 ). Self-reported subjective peritraumatic reactions are a valuable complement to the objective dimensions of stressful situations collected in Phase 1 when trying to predict chronic mental health disorders, such as PTSD ( 40 ). Accordingly, subjective self-reports of individual COVID-19 stress intensity and relevant personality traits will also be used as an indicator of potential chronic mental health disorders, in comparison with the more objective metrics developed in Phase 1.

Phase 3: Selection of multimodal stimulation

This phase concerns the design and development of appropriate multimodal stimulation paradigms to optimally elicit specific neuro-psycho-physiological individual reactions among HCW participants ( Figure 2 ). Input-output multimodal experimental stimulation paradigms that elicit the specific multimodal features reflecting the impact of stress on the patients’ neuro-psycho-physiological state ( 21 ) typically cover: baseline neuro-physiological functioning; well-established generic stressful emotional stimuli, such as different versions of acoustic startle stimuli and airblasts; startle modulation paradigms, such as fear-potentiated and anxiety-potentiated startle ( 41 ) and prepulse inhibition; aversive images and sounds semantically related to COVID-19 clinical environments; and a variety of cognitive tasks, eg, different versions of Stroop tests ( 42 , 43 ), memory tasks ( 44 ), arithmetic tasks ( 45 , 46 ), or verbal fluency tasks ( 47 , 48 ). The developed multimodal stimulation paradigms are administered to the HCWs in a controlled clinical laboratory setting in order to acquire their multimodal neuro-physiological reactions. Acoustic startle stimuli are usually 50-ms broadband noise bursts with immediate rise time and intensity ranging from 95 to 110 dB SPL ( 49 ), delivered binaurally through headphones. To induce laboratory fear, threat, or anxiety by means of predictable and unpredictable delivery of aversive events ( 50 ), other aversive stimuli can be used, eg, combinations of airblasts to the neck, aversive images on the screen and sounds ( 51 ), as well as annoying but not painful electric shocks, eg, 1.5-2.5 mA, 5-ms duration. Existing semantically and emotionally annotated stimulus databases can facilitate an efficient and accurate search for optimal aversive audio-visual stimuli to include in the multimodal stimulation paradigms ( 52 , 53 ). Cognitive tasks are usually administered through specifically designed programs that allow measurement of response duration and accuracy.
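To make the stimulus specification concrete, the following minimal sketch generates a 50-ms broadband noise burst with an immediate (rectangular) rise time, matching the parameters quoted above. All names and values are illustrative assumptions: calibration to a target intensity of 95-110 dB SPL cannot be done in software alone and would require a sound level meter and the actual playback hardware.

```python
import numpy as np

def startle_burst(duration_ms=50, fs=44100, amplitude=1.0, seed=None):
    """Broadband (white) noise burst with immediate rise time.

    `amplitude` is a relative digital scale in [0, 1]; mapping it to a
    physical intensity (eg, 95-110 dB SPL) requires calibration against
    the headphones actually used for binaural delivery.
    """
    rng = np.random.default_rng(seed)
    n_samples = int(fs * duration_ms / 1000)   # 50 ms -> 2205 samples at 44.1 kHz
    burst = rng.uniform(-1.0, 1.0, n_samples)  # flat-spectrum noise, no onset ramp
    return (amplitude * burst).astype(np.float32)

stimulus = startle_burst(seed=0)
print(stimulus.shape)  # (2205,)
```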

Figure 2. Design and development of multimodal stimulation paradigms for optimal elicitation of specific neuro-psycho-physiological individual reactions; adapted from ( 21 ). HCW – health care workers; fNIRS – functional near-infrared spectroscopy; EEG – electroencephalography; ECG – electrocardiography; EMG – electromyography; EDA – electrodermal activity.

Phase 4: Multimodal data acquisition and feature computation

This phase ( Figure 3 ) concerns the acquisition of multimodal neuro-physiological reactions to the stimulation paradigms proposed in the previous phase and the computation of corresponding features relevant for prediction of mental health disorders. The proposed methodology is based on state-of-the-art sensors for measuring the individual’s multimodal neuro-psycho-physiological reactions: functional near-infrared spectroscopy (fNIRS); electroencephalography (EEG); peripheral physiology, ie, electrocardiography (ECG), electromyography (EMG), electrodermal activity (EDA), and respiration; speech/acoustic and linguistic reactions; and facial/gesture and oculomotor reactions ( 54 , 55 ). Such measurements, obtained in response to the relevant stimuli described in Phase 3, have the potential to objectivize the traditional diagnostic methodology in psychiatry. In our laboratory, the Biopac MP150 system (BIOPAC Systems Inc., Goleta, CA, USA) is used for the acquisition of the neuro-physiological signals. A Gazepoint GP3 HD eye-tracker (Gazepoint, Vancouver, Canada) is used for detection of spontaneous blinks, tracking of changes in pupil dilation, and gaze tracking. A microphone and a webcam are used for collecting speech and gesture data, while the fNIRS Biopac Model 1100 Imager together with the COBI Studio Software (BIOPAC Systems Inc.) is used for brain activation measurements.

Figure 3. Multimodal data acquisition and feature computation. Illustrated is a subset of features: HR mean – mean heart rate; HR recovery – heart rate recovery; RSA – respiratory sinus arrhythmia; RMSSD – root mean square of successive differences; EDA AS – EDA-based startle response measure; EMG AS – EMG-based startle response measure; F0 voice – voice fundamental frequency; RMS voice – voice energy (root mean square); F1-4 – voice formants; ZCR – voice zero-crossing rate; PD – pupil dilation; SPV – saccadic peak velocity; fNIRS HbO – oxygenated hemoglobin.

After pre-processing of the neuro-physiological signals (deriving the inter-beat-interval time series from the QRS complexes detected in the ECG signal, cleaning the respiratory and EDA data, and appropriately filtering the EMG data for eyeblink startle response assessment), an array of relevant multimodal features is computed ( 17 , 21 ). These features are elicited and computed according to research findings on their associations with specific positive or negative predictors or outcomes of mental health disorders, such as stress resilience/vulnerability and other personality traits, distress, anxiety, PTSD, or depression. The features are therefore defined and computed in a theory-driven manner. Examples include resting heart rate ( 56 , 57 ) and heart rate variability (HRV) ( 58 , 59 ), respiratory sinus arrhythmia ( 21 , 60 ), HRV-based psychophysiological allostasis ( 21 , 58 ), EMG-based and EDA-based startle reactivity ( 61 ), various features related to speech prosody ( 62 ), prefrontal cortex activation during various cognitive tasks ( 43 , 44 ), and alpha band-related parietal EEG asymmetry ( 63 ). This integrated multimodal neuro-psycho-physiological approach to predicting mental health disorders emphasizes the importance of combining different multimodal features to enhance predictive power, since any single feature is a relatively weak discriminator in the assessment and prediction of mental health deterioration.
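As an illustration of this theory-driven feature computation, the sketch below derives two of the features named in Figure 3 (mean heart rate and RMSSD, plus SDNN) from an inter-beat-interval series. It assumes QRS detection and artifact correction have already been performed; the synthetic IBI series stands in for real ECG-derived data.

```python
import numpy as np

def hrv_features(ibi_ms):
    """Simple time-domain HRV features from an inter-beat-interval series
    in milliseconds (QRS detection and artifact correction are assumed
    to have been done upstream)."""
    ibi = np.asarray(ibi_ms, dtype=float)
    hr_mean = 60000.0 / ibi.mean()               # mean heart rate, beats/min
    sdnn = ibi.std(ddof=1)                       # overall variability
    rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2))  # beat-to-beat variability
    return {"HR_mean": hr_mean, "SDNN": sdnn, "RMSSD": rmssd}

# Illustrative resting recording: IBIs around 800 ms (~75 beats/min)
rng = np.random.default_rng(0)
ibi = 800 + 40 * rng.standard_normal(300)
print(hrv_features(ibi))
```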

Phase 5: Data analysis for prediction of mental health disorders

Due to the potentially large amounts of highly heterogeneous data, Phase 5 is accomplished using cloud storage and cloud computing resources, as shown in Figure 1. Statistical correlation-based analyses are expected to provide better insight into the neuro-physiological risk markers for the development of chronic stress-related mental health problems during the COVID-19 pandemic. Feature selection and classification based on ML, as opposed to statistical methods, can capture more complex, highly nonlinear interactions between features when inferring an individual HCW’s risk of developing chronic mental health problems. Individuals exhibiting a high risk of chronic stress-related mental health problems may urgently need effective and efficient preventive treatment, using state-of-the-art tools and means of digital psychiatry, such as computerized cognitive behavioral therapy ( 54 ) and telepsychiatry, which are efficiently applicable in the early stages of illness ( 64 ). A more detailed description of the proposed tools and means of statistical and ML analyses is given in the following section.

Statistical and machine learning analysis

A data-driven verification of the various multimodal neuro-psycho-physiological features extracted in Phase 4 can be obtained by applying statistical analyses and ML techniques in relation to the objective stress intensity assessment from Phase 1, as well as the subjective self-report indicators of experienced stress and relevant psychological traits from Phase 2. Phase 5 can provide valuable insight into neuro-psycho-physiological risk markers for the development of chronic stress-related mental/physical problems in the context of the COVID-19 pandemic, and increase the translational potential of such features. A similar data-mining-based approach has previously been used in the analysis of diagnostic data for differentiating PTSD patients from participants with psychiatric diagnoses other than PTSD ( 65 ). That work demonstrated the applicability of ML for the analysis of PTSD, but only on the basis of data obtained from structured psychiatric interviews and psychiatric scales, which corresponds only to Phase 2 of the methodology proposed in this article.

In terms of statistical analysis, various correlation analysis approaches can be employed. One example is canonical-correlation analysis (CCA), a technique suitable for investigating the relationships between variables coming from distinct sets, eg, the relationship between variables obtained in Phase 1 and Phase 4, or Phase 2 and Phase 4. CCA provides interpretable linear combinations of variables from the two sets that are maximally correlated. To maximize the statistical power of the conclusions, ie, to avoid the large statistical corrections required when conducting numerous exploratory tests for the significance of correlation coefficients, several particularly well-founded hypotheses should be defined a priori, before computation of the full correlation matrix. These hypotheses should be those with the strongest evidence from the literature regarding expected pairwise associations between specific objective metrics of stress intensity exposure, subjective self-report metrics of experienced stress and relevant psychological traits, and objectively measured/computed neuro-physiological features. A brief overview of the neuro-physiological features with the highest predictive potential according to the research literature is given in the description of Phase 4. Additionally, a subset of the obtained data can be used to separate the participants according to specific group memberships, eg, high distress vs low distress. For example, a recent COVID-19-related research paper ( 28 ) uses data analogous to our proposed Phase 1 and Phase 2 to define resilience in the face of exposure to a stressor of a given intensity. However, in that work all data were obtained via self-report, whereas we propose the integration of objectively assessed stressor severity (Phase 1) and self-report data (Phase 2) with the relevant neuro-physiological features (Phases 3 and 4). Accordingly, various regression analyses or even between-group tests can be conducted.
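A minimal sketch of such a CCA is given below, using scikit-learn. The two data blocks are random placeholders standing in for, eg, Phase 2 self-report scores and Phase 4 neuro-physiological features; the block sizes and number of canonical components are illustrative assumptions, not values from the proposed protocol.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(42)
n_participants = 120                           # illustrative sample size
X = rng.standard_normal((n_participants, 6))   # eg, Phase 2 self-report scores
Y = rng.standard_normal((n_participants, 10))  # eg, Phase 4 physiological features

cca = CCA(n_components=2)
X_scores, Y_scores = cca.fit_transform(X, Y)

# Each canonical correlation is the correlation between paired canonical variates
for k in range(2):
    r = np.corrcoef(X_scores[:, k], Y_scores[:, k])[0, 1]
    print(f"Canonical correlation {k + 1}: {r:.2f}")
```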

Regarding the application of ML, both unsupervised and supervised learning approaches should be considered. Unsupervised learning approaches, such as principal component analysis, factor analysis, or cluster analysis, do not require labeled data; they can help reveal previously undetected patterns in heterogeneous sets of data and aid understanding of the relationships between objective stressor severity, self-report assessments, and the neuro-psycho-physiological characterization of the participant. For example, a non-classical unsupervised learning approach, based on a brain-inspired spiking neural network (SNN) model trained using EEG data, has provided novel insights into brain functioning in depression and the effects of mindfulness training on brain connectivity ( 66 ). Such novel unsupervised approaches, based on the spike-timing-dependent plasticity learning rules of SNN connectivity emerging from complex spatio-temporal brain data such as EEG and fNIRS, which are considered in the proposed methodology, could help reveal and understand early patterns of mental health deterioration in HCWs. When labeled data are available, the main aim of supervised ML, as opposed to statistical methods, is the maximization of classification/prediction accuracy, often at the cost of model explainability and rigorous statistical validation. Accordingly, recent work highlights the need to establish an ML framework in psychiatry that nurtures trustworthiness, focusing on the explainability, transparency, and generalizability of the obtained models ( 11 ). Such a framework, regardless of superior classification/prediction performance, is critical if AI methods are to be employed in the diagnosis, monitoring, evaluation, and prognosis of mental illness. Supervised learning in the context of the proposed methodology can be formulated both as regression and as classification tasks. The neuro-physiological features obtained in Phase 4 can be integrated by a model, eg, a support vector machine, random forest, or artificial neural network, in an accordingly formulated supervised learning task. For example, data from Phase 4 can be used to model various labels emerging from Phases 1 and 2, such as estimation of objective stressor severity, available from Phase 1, or classification of high vs low distress in HCWs based on the data obtained in Phase 2.
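The following sketch shows one way such a supervised task could be set up: a random forest classifying high vs low distress (a Phase 2 label) from Phase 4 features, with stratified cross-validation. The data are synthetic placeholders, so the reported AUC will hover around chance; with real features the same pipeline would quantify predictive value.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n = 200
X = rng.standard_normal((n, 15))  # placeholder for Phase 4 multimodal features
y = rng.integers(0, 2, n)         # placeholder Phase 2 label: high (1) vs low (0) distress

clf = RandomForestClassifier(n_estimators=300, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"Cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```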

To summarize, technology based on AI and ML can only be as strong as the data the models are trained on, which is particularly important in mental health diagnostics. Currently, for most classification or prediction tasks emerging from the area of mental health, labels are most likely still not quantified well enough to successfully train an algorithm. One possible resolution of this labeling issue, as briefly stated in the introductory section, is for data-driven AI technologies to help mental health practitioners redefine mental illnesses more objectively than is currently done in the DSM-5. Additionally, AI can help personalize treatments based on a patient’s unique characteristics. Such characteristics are often very subtle and hardly observable by human mental health practitioners. For example, subtle shifts in speech tone or pace can be a sign of mania or depression, and such patterns can now be detected even more precisely by an AI-driven system than by humans. AI can exploit language and speech, among many other available modalities, as one of the critical pathways to detecting patient mental states, especially through mobile devices ( 67 ), which should also be regarded as highly important in the context of predicting mental health disorders induced by the COVID-19 pandemic.

The proposed methodology for prediction of mental health disorders among HCWs during the ongoing pandemic, based on AI-aided data analysis, is particularly important because HCWs are a high-risk group for contracting COVID-19 ( 68 ) and for developing later stress-related symptoms. However, the methodology might be applied generally to all those who were exposed to higher levels of such risks during the COVID-19 pandemic. Its main objective is to expand the subjective metrics used as predictors of potential mental health disorders, mainly specific to Phase 2, with the more objective metrics derived in Phases 1, 3, and 4. The use of neuro-physiological features is expected to provide additional information and increase reliability when identifying individuals at particularly high risk. Such efforts are well aligned with the growing literature on the application of AI methods to the prediction of chronic mental health disorders, which was initially focused mainly on self-report predictor variables ( 65 , 69 , 70 ) but has subsequently been extended to speech features ( 62 ) and various biomarkers ( 57 , 71 , 72 ). These efforts should help mental health practitioners make their diagnoses more objective than is currently possible with the DSM-5. Acquiring more reliable neuro-psycho-physiological predictors, based on the assessment of objective metrics, for early identification of vulnerable individuals is an important step forward in the prevention of mental health disorders caused by the COVID-19 pandemic. Early identification of mental health disorders based on the proposed methodology, together with early warning indicators and risk factors, is a prerequisite for timely prediction and prevention of mental health disorders in the global population, helping clinicians make diagnoses more quickly and accurately and rapidly provide optimal treatment for patients.

Society-related Fears and Personal Mental Health

  • Original Research
  • Open access
  • Published: 31 August 2024


  • Michael Mutz   ORCID: orcid.org/0000-0002-0549-0462 1  

This paper explores the relationship between society-related fears and personal mental health. Respondents of an online survey representing the German population (18+ years) reported how worried they are about eight societal developments (armed conflicts, social inequality, rise of right-wing extremism, crime and terror, immigration, climate change, artificial intelligence, pandemics). The analyses demonstrate that the sum score of society-related fears is significantly associated with higher levels of anxiety and depression. In particular, concerns about poverty, digitalization and pandemics are associated with higher anxiety and depression scores. Further explorations show that specific fears are intermingled with political ideologies, i.e. people fear different societal developments according to their ideological standpoints. Politically left-leaning individuals regard climate change and rising right-wing extremism as more threatening, while politically right-leaning individuals’ fears relate more strongly to migrants, terror and crime. The fears with the largest negative effect on mental health are poverty and armed conflicts for individuals who identify as left and digitalization for individuals who identify as right. Overall, the findings lend support to the general notion that the world’s current ‘polycrisis’ is highly relevant to, and generally detrimental for, mental health and human wellbeing.


Introduction

A myriad of crisis scenarios related to economic, financial, humanitarian, social, political or environmental problems are occupying public discourse. Whether war or global warming, each crisis has the potential to induce fear. Crises are by definition events of great difficulty and danger that can potentially disrupt society and harm the wellbeing of many people (Walby, 2015 ). As such, crises systematically produce a moment of ambiguity and uncertainty, pointing to a future that is open at best and endangered at worst (Steg, 2019 ). Some authors have even argued that contemporary society is characterized by “multiple crises” (Brand, 2009 ) or “polycrisis” (Lawrence et al., 2024 ). These terms refer to a condition where crises of amplifying severity follow each other at an accelerating pace, thus becoming a kind of permanent condition.

In fact, many of society’s crises pose a real threat to people and their quality of life: For instance, the climate crisis, with global warming and extreme weather events, destroys livelihoods and biodiversity in many regions of the planet (Abbasi et al., 2023 ). Rising levels of economic inequality and further social frictions between rich and poor are associated with anxiety, stress and poor health (Pickett & Wilkinson, 2015 ). Digitalization and artificial intelligence (AI) have the potential to disrupt employment: Forecasts suggest that 47% of current jobs are “at high risk” of being replaced by technology within the next 20 years (Frey & Osborne, 2017 ). It is reasonable to assume that these developments could trigger fears and thus produce a public mood of “an extraordinarily uncertain and threatening future” (Borisch, 2023 : 332).

In several theoretical accounts it has been argued that fear has become the basic, underlying tone of contemporary society (Bauman, 2006 ; Bude, 2014 ; Furedi, 1997 ). In Bauman’s terms ( 2006 ), it is a vague and undefined “liquid fear” that is rather a kind of background feeling of permanent uncertainty in a society in which reliable certainties erode. According to him, the persistence of fear relates to the speed of change in late modernity and the associated loss of fixed points of reference. Furedi ( 1997 ) states that a “culture of fear” is grounded in a ubiquitous perception of the world as a dangerous place. He understands this as a collective disposition, i.e. as a very basic and underlying sentiment that attaches itself to and shapes concrete human experiences.

Bude ( 2014 ) claims that current society is no longer held together by the promise of social advancement (as in previous decades), but rather by the threat of social exclusion. He also argues that a fundamental insecurity exists, namely that younger generations cannot take it for granted that the future will be better than the present and the past. This perception creates a fear of social decline that reaches deep into the middle class and helps to create constantly self-optimizing personalities who are driven by the subliminal fear of falling out of the middle of society.

Some authors emphasize the socially constructed nature of fear (Furedi, 1997 ; Tudor, 2003 ) and stress that contemporary culture has a tendency to foreground risk. The divergence between objective risk and subjective feelings of threat can best be illustrated in the field of crime and terrorism, where studies show that people’s fear is substantially shaped by media consumption (Romer et al., 2003 ; Williamson et al., 2019 ). Hence, fear is often not based on experience, but rather on risk communication (Guzelian, 2004 ). The news media in particular are a key factor in the promotion and amplification of a “discourse of fear” (Altheide, 2002 ). However, public discourse does not necessarily amplify fear; it could potentially also calm it, highlighting the plasticity of society-related fears (Heins, 2021 ).

As all societal crises have serious and negative implications for a large number of people, it is not surprising that public opinion polls from Germany show that a majority of Germans reports that crises, such as the climate or migration crisis, cause them worry (Infratest dimap, 2024 ; Ipsos, 2024 ). In recent decades, war, crime, and migration have been the issues Germans have been most concerned about, however, with significant fluctuations over time (Lübke, 2019 ). Less clear is the question whether society-related fear is associated with increased risks of personal mental health issues.

One previous German survey study examined the relationship between personal anxiety and fears related to the social and societal environment (Adolph et al., 2016 ). They show, for instance, that higher levels of fear related to political and economic issues, including terrorism or environmental disasters, are associated with severe anxiety symptoms. They argue that an “intensity continuum” exists that stretches from political and economic anxieties over anxieties related to the person’s social life to various forms of clinical anxiety at the personal level (Adolph et al., 2016 ).

That society can be a source of fear is certainly not an entirely new idea. Yet, society-related fears have hardly been systematically included in the discourse on mental health and wellbeing to date. This paper aims to address three key questions that have not yet been adequately answered: (1) What proportion of people are worried about or afraid of specific societal crises? (2) To what extent does fear of societal crises reflect people’s political positions? (3) Does fear of societal crises as a whole, or any individual societal fear, have an impact on personal mental health? The present paper explores these questions based on survey data that represent the German population.

Literature Review

Although the topic of society-related fear has received too little attention in the wellbeing and mental health literature, exceptions are studies that more narrowly address one specific crisis or one specific fear. Hence, studies addressing economic recessions, fear of crime, the COVID-19 pandemic, or climate anxiety can help summarize the state of knowledge. In addition, I also summarize the few studies that have addressed society-related fears more generally, with their links to individual mental health or wellbeing outcomes. Finally, I recap literature that has examined or reflected on the ideological nature of society-related fears and concerns.

Society-related Fears and Their Consequences for Wellbeing and Mental Health

A scoping review based on 127 quantitative studies conducted in OECD countries summarizes that depression and anxiety levels rise during economic recessions , particularly in those groups with insecure jobs (Guerra & Eboreime, 2021 ). This finding also holds for life satisfaction, which declines in times of economic crisis (Burger et al., 2023 ). In a series of experimental studies participants reported higher levels of fear when they found themselves in an experimental group that had to expect a status decline or downward mobility (Jetten et al., 2021 ). Data from the European Quality of Life Survey also demonstrate that status anxiety is associated with unhappiness (Delhey & Dragolov, 2014 ). Empirical trend analyses from Germany further show that the fear of job loss and social decline increased in the German middle class in periods where the labor market was difficult and the economic outlook rather pessimistic (Lengfeld & Hirschle, 2009 ; Schöneck et al., 2011 ). Closely related to status anxiety are fears about increasing levels of social inequality , which rank among the most frequently mentioned worries reported by Germans (Ipsos, 2024 ).

Regarding fear of crime , studies show that the regional crime rate is positively associated with fear (Bug et al., 2015 ) and negatively associated with wellbeing (Powdthavee, 2005 ). However, the perception of crime is also important and predicts lower life satisfaction even when controlling for victimization experiences (Brenig & Proeger, 2018 ) or real crime rates (Manning et al., 2022 ). An Austrian study further shows that fear of crime relates to underlying social and existential threats, i.e. it forms part of a generalized syndrome of insecurity (Hirtenlehner, 2006 ). Sometimes, public discourse relates fear of crime to immigration , as there is a widespread perception that immigrants contribute to increased levels of crime (Gurinskaya et al., 2024 ; Hirtenlehner, 2019 ). Academics have also discussed fears related to immigration (Blanc, 2023 ; Bloom, 2015 ). Data from several European countries show that fear of immigration has increased over three decades, in Germany particularly during the so-called “immigration crisis” of 2015/16 (Fraser & Üngör, 2019 ). A “migration panic” is nurtured by economic anxieties, concerns about status decline and perceptions of disorder (Hirtenlehner, 2019 ). However, the extent to which such perceptions translate into personal anxiety has not yet been thoroughly researched.

Germany, as well as many other European countries, has experienced a rise in right-wing extremism (Pisoiu & Ahmed, 2016 ). Many regard the rise of populist and extremist movements and right-wing parties as worrying. Anecdotal evidence from Germany suggests that the 2024 mass demonstrations against right-wing parties were driven in part by anxiety and concern (WDR, 2024 ). A commercial poll from February 2024 suggests that 59% of Germans fear a rise of (right-wing) political extremism (R + V Infocenter, 2024 ). However, there is a lack of scientific research on the relationship between fear of extremism and personal anxiety or wellbeing levels.

In the context of the COVID-19 pandemic , fear also played a role. A cross-national analysis of European countries reveals that in stages of the pandemic with higher death rates, life satisfaction dropped (Easterlin & O’Connor, 2023 ). A review shows that COVID-19-related fear was associated with mental health problems, such as anxiety, distress, depression, and insomnia (Şimşir et al., 2022 ). Women usually reported higher levels of COVID-19-related fear (Metin et al., 2022 ). A survey experiment from Sweden further shows that fear (in the sense of scare) and anxiety (in the sense of worry) were higher in participants who were reminded of the deadliness of the virus and the strained situation in the health care system (Renström & Bäck, 2021 ).

Climate anxiety or eco-anxiety, i.e. the fear concerning the devastating consequences of climate change for life on earth, is another emerging field of research. To date, some studies found associations between climate anxiety and personal mental health issues, such as elevated levels of stress, anxiety and depression (Hajek & König, 2023 ; Heinzel et al., 2023 ; Pihkala, 2020 ; Thomson & Roach, 2023 ; Wullenkord et al., 2021 ). Moreover, climate anxiety negatively correlates with age, indicating that this type of fear – in contrast to most other society-related fears – is particularly threatening for younger age groups (Hajek & König, 2023 ; Heinzel et al., 2023 ).

Research has also addressed fears associated with the development and diffusion of digital and AI technologies . These fears may concern the possible replacement of humans in a significant proportion of occupations (Frey & Osborne, 2017 ), the lack of human control over emerging “super AI” systems, for instance in the military domain (Sehrawat, 2017 ), or privacy violations (Li & Huang, 2020 ). Fear of AI technologies, such as autonomous robots or driving systems, may extend to 18–26% of the population, according to representative U.S. and German surveys (Liang & Lee, 2017 ; Meinlschmidt et al., 2023 ). Studies further show that a negative view of, and concerns regarding, AI technologies correlate with lower life satisfaction at the micro and macro social levels (Hinks, 2024 ; Zhao et al., 2024 ).

Finally, a source of fear are wars and armed conflicts . Research shows that the experience of armed conflict is a trigger for various mental health problems (Charlson et al., 2019 ). Witnessing armed conflicts in close proximity may also cause concern, for instance about a possible escalation of the war. War anxiety is associated with stress and insomnia (Vargová et al., 2024 ). About 50% of Germans reported severe fear of war in a survey carried out in March 2022 (Hajek et al., 2023a ). Moreover, another German study measured a higher anxiety level in the population in the first weeks of the Russian war against Ukraine than during the COVID-19 pandemic (Gottschick et al., 2023 ). Fears of Germany becoming involved in a war and of the outbreak of a nuclear war were both associated with heightened levels of anxiety and depression (Hajek et al., 2023b ).

Apart from studies that have looked at single fears, there are a few studies that have examined society-related fears in general, i.e., independent of one specific problem or crisis. Using “big data”, a US study on fear in society concludes that fear is on the rise (Kovács, 2023 ). Analyzing approximately 7 million online reviews with a semantic coding approach from computational linguistics, the study illustrates that anxiety-related content increased by 20% from 2006 to 2021. Another descriptive account (Ipsos, 2024 ) indicates that issues such as “inflation”, “crime and violence”, “poverty and social inequality” and “climate change” represent the primary fears of the German population, with 24–29% expressing concern about each of these developments.

Based on the literature on society-related fears, it seems reasonable to assume that (a) a large proportion of the German population is concerned about societal developments and trends and (b) a link could exist between fears related to the societal level and mental health issues at the personal level.

The Ideological Nature of Society-related Fears

Not everyone shares the same societal fears and concerns. Individual differences, however, do not reflect purely personal characteristics, but can be situated within a larger political or ideological framework. For instance, Nussbaum ( 2018 ) describes how working-class Americans feel threatened by globalization and digitalization and how right-wing populists in particular can easily capitalize on these fears. For example, fear helps create a desire for a strong leader, mobilize for extreme positions, and scapegoat minorities. Recent accounts elaborate on the mechanisms that link insecurity and migration-related anxiety on the one hand to right-wing ideologies and support on the other, highlighting the role of affective reactions to political issues and the search for stable sources of meaning and identity (Salmela & von Scheve, 2017 ; Yendell & Pickel, 2019 ). Fear of migrant crime and the perception that a cultural or national identity is threatened are textbook examples of topics that right-wing parties usually exploit. For instance, recent opinion polls indicate that 90% of supporters of the far-right party, Alternative für Deutschland , express concern about a perceived decline of German culture and language (Infratest dimap, 2024 ). Some scholars even argue that a heightened sensitivity to perceiving uncertainty and change as threatening is at the core of conservative ideologies (Jost et al., 2007 ).

Notwithstanding the link between a culture of fear, conservatism and right-wing populism, the relationship might still be more complex. Nussbaum ( 2018 ) further explains that people who self-identify as “left” also fear societal developments. In the US context, they fear the removal of “hard-won rights for women and minorities” or the “collapse of democratic freedoms – of speech, travel, association, press” (Nussbaum, 2018 : p. 2). In Germany, supporters of the Social Democratic Party report higher levels of fear related to wars compared to supporters of center-right and far-right parties (Hajek & König, 2022 ), whereas supporters of the Green Party are most worried about the consequences of climate change (Infratest dimap, 2024 ). It can be conjectured from these findings that there are fears on both sides of the ideological spectrum, but that different societal developments are interpreted as most threatening.

Which societal developments trigger fear is likely to depend on an individual’s ideological standpoint. It can be assumed, for example, that the population group most concerned about the climate crisis and the population group most concerned about incoming refugees are anything but identical. Rather, each crisis “emotionalizes” and threatens a different population group. If this assumption is correct, then fear should depend on the political and ideological lens through which people look at society. Furthermore, a person’s ideological standpoint could also influence which societal developments or crises translate into personal fears and thus might impair mental health. A more exploratory analysis therefore tests (a) whether society-related fears are associated with ideological standpoints and (b) whether differences exist between people who identify as politically “left”, “right”, and “center” in terms of the effects of particular societal fears on personal mental health.

Study Design and Data Collection

This study used a cross-sectional design, drawing on data from a large-scale representative survey. The survey was integrated into an existing German panel to which access was provided by Forsa, a company specializing in public opinion research. In order to ensure a probability sample that accurately represents the German population, Forsa employs an offline recruitment process for all panelists that utilizes Random Digit Dialing (RDD; Wolter et al., 2009 ). The RDD procedure guarantees that all individuals with a telephone connection, whether mobile or landline, have an equal opportunity to be invited into the panel, thereby ensuring that the panel’s composition mirrors that of the German population. All panelists gave their written consent to be contacted for this study and participated voluntarily. They received information about the present study via Email together with a link to the anonymous online questionnaire. Respondents were able to answer the questionnaire directly on their computers, tablets, or mobile phones and were permitted to terminate the survey at any point and resume at a later time. Data collection was carried out between January 5 and January 13, 2024.

The resulting sample ( N  = 1,001) broadly represents the population living in Germany (≥ 18 years) with access to the Internet. The mean age is 48.4 years (SD = 17.2). The sample includes similar proportions of males (50.4%) and females (49.6%). With regard to education, 22.1% have a lower secondary grade (“Hauptschulabschluss”), 34.3% have a medium secondary grade (“Mittlere Reife”), and 43.6% have a higher secondary grade (“Abitur”). Although the raw data reflect the composition of the German population fairly well, a weighting factor is applied in all analyses to correct for minor bias in the sample, most notably a slight underrepresentation of younger age groups and of individuals living in East German federal states.

Mental Health Issues

Mental health complaints are measured with the Patient Health Questionnaire for Depression and Anxiety (PHQ-4; Kroenke et al., 2009 ). The PHQ-4, introduced as a brief screening tool for anxiety and depression, has proved its validity and reliability and its briefness makes it particularly useful for large-scale surveys (Adzrago et al., 2024 ; Kroenke et al., 2009 ; Löwe et al., 2010 ). Respondents are asked how often they have been bothered by four symptoms in the past two weeks. Two items refer to anxiety (e.g. “not being able to stop or control worrying”) and two items measure depressive symptoms (e.g. “little interest or pleasure in doing things”). The rating scale provided allows responses from 1=“not at all”, 2=“several days”, 3=“more than half the days” to 4=“nearly every day”. The final scale has a good reliability (Cronbach’s α = 0.88); its mean is M  = 1.68 ( SD  = 0.73; min = 1.00; max = 4.00).
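As a worked illustration of how such a scale score and its reliability are computed, the sketch below scores synthetic PHQ-4 responses on the 1-4 scale and calculates Cronbach's alpha with the standard formula. The data and variable names are placeholders, not the survey data; because the synthetic items are uncorrelated, alpha will be near zero here, whereas the paper reports 0.88 for the real, correlated items.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Synthetic PHQ-4 responses on the paper's 1-4 rating scale
rng = np.random.default_rng(1)
phq4_items = rng.integers(1, 5, size=(1001, 4))
phq4_score = phq4_items.mean(axis=1)  # per-respondent scale mean, as in the paper
print(f"M = {phq4_score.mean():.2f}, alpha = {cronbach_alpha(phq4_items):.2f}")
```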

Society-related Fears

A new measure was created to capture society-related fears. Respondents were presented with a list of eight societal developments and crises and then were asked how often they had fears or worries related to the crisis described. The list of potentially worrying societal developments included: a) climate change and its consequences, b) immigration of refugees and asylum seekers, c) poverty and rising levels of social inequality, d) digitalization and artificial intelligence, e) wars and armed conflicts, f) crime and terrorism, g) the rise of right-wing extremist parties and movements, h) pandemics and novel pathogens. Respondents could use a 5-point Likert scale to rate the fear related to each of these developments ranging from 1=“no fear at all” to 5=“very strong fear”. Figure  1 shows descriptive statistics of these eight items. Besides the single items, the analyses also use a mean score of the eight variables ( M  = 3.58; SD  = 0.62; min = 1.00; max = 5.00).

Figure 1. Mean scores for eight society-related fears. Error bars show the 95% confidence interval of the mean. Percentages in brackets indicate the proportion of respondents with “strong” or “very strong” fear. Data represent the German population 18+ years with Internet access ( N  = 1,001).

Political Ideology

The Left-Right Self-Placement scale (LRS) measures a basic political stand on a left-right dimension. In Germany, a “right” orientation refers to conservative, market-liberal, and nationalistic attitudes, whereas a “left” orientation favors progressive and egalitarian policies. Scholars consider the left-right pole as the most important and as a rather stable ideological dimension, which relates to voting behavior and is part of most studies that examine political or ideological issues (Klingemann, 1972 ; Knutsen, 1998 ). Participants indicated their ideological orientation on a LRS scale that ranged from 1=“left” to 10=“right”. In the present sample, the mean score of the scale is M  = 5.49 ( SD  = 2.21; min = 1.00; max = 10.00).

Demographic Variables

The regression models include a variety of covariates, given that anxiety and mental health levels vary with socioeconomic and sociodemographic variables. Previous studies point to increased mental health problems among older and socioeconomically disadvantaged groups, females, and immigrants (Adolph et al., 2016 ; Guerra & Eboreime, 2021 ; Metin et al., 2022 ; Walther et al., 2021 ), whereas being in a relationship (Pieh et al., 2020 ) or being religiously affiliated (Hodapp & Zwingmann, 2019 ) could somewhat protect from mental health issues. Therefore, I control for age (in years), gender (1=“female”, 0=“male”), the highest educational degree obtained by the respondent (1=“lower secondary education” to 4=“tertiary education”), the respondent’s personal net income (in 10 income groups from 1=“no income” to 10=“>5,000 €”), relationship status (1=“living with partner”, 0=“single/widowed”), immigrant status (1=“1st/2nd generation immigrants”, 0=“natives”), religious affiliation (1=“any denominational affiliation”, 0=“no denominational affiliation”) and residence (1=“East Germany”, 0=“West Germany”). Controlling for these variables allows for a more accurate estimation of the effect of society-related fears. It should be noted that the variable for income, despite being measured in 10 categories, is meant to estimate a linear effect, i.e., whether a higher income level is associated with fewer mental health problems.

Analytical Approach

The paper first presents mean values with standard errors of the mean for the eight society-related fears, indicating the level of worry the surveyed crises and developments cause among Germans. In addition, I indicate the proportion of respondents who reported “strong” or “very strong” fear, i.e. those who chose response options 4 and 5. In a second step, associations between society-related fears and political ideology are examined. Based on the respondents’ position on the LRS scale, the paper analyzes the extent to which society-related fears vary between individuals who position themselves on the left, on the right or in the center of the ideological spectrum. A one-way ANOVA is applied for each fear to test for significant differences between the three ideological groups (“left”, “center”, “right”). Thirdly, I calculate multiple linear (ML) regression models with personal mental health (PHQ-4) as the dependent variable. The ML regressions assess whether individuals with higher levels of society-related fears report worse mental health. A first model shows whether all societal fears in sum are associated with mental health complaints. For this purpose, regression models were calculated that include a mean score reflecting the extent to which a person feels threatened by the eight societal developments examined here. A second regression model then tests for associations between mental health and the single society-related fears. I calculate this regression also separately for individuals who identify as politically left, right and center, as it is likely that ideology affects the relationship between societal developments and personal mental health issues. Both regression models include the above-mentioned sociodemographic control variables. Because these models consider the eight society-related fears simultaneously and thus account for possible correlations between them, I also report zero-order correlations for each fear. This allows estimating the effect of each fear on mental health with and without controlling for the influence of the other society-related fears. The regression models document unstandardized and standardized regression estimates (b, β). All data analyses were performed using IBM SPSS 29.
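The paper's analyses were run in IBM SPSS 29, but a minimal open-source sketch of the first regression model (PHQ-4 regressed on the summed fear score plus covariates, with the survey weighting factor) could look like the following. All column names and the synthetic data are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the weighted survey data (real N = 1,001)
rng = np.random.default_rng(2)
n = 1001
df = pd.DataFrame({
    "phq4": rng.uniform(1, 4, n),       # PHQ-4 scale mean (1-4)
    "fear_sum": rng.uniform(1, 5, n),   # mean of the eight fear items (1-5)
    "age": rng.integers(18, 85, n),
    "female": rng.integers(0, 2, n),
    "income": rng.integers(1, 11, n),   # 10 income groups
    "partner": rng.integers(0, 2, n),
    "weight": rng.uniform(0.5, 1.5, n), # survey weighting factor
})

# Weighted least squares mirrors the weighted multiple linear regression
model = smf.wls("phq4 ~ fear_sum + age + female + income + partner",
                data=df, weights=df["weight"]).fit()
print(model.params["fear_sum"])  # with the real data this is the reported b = 0.28
```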

The Proportion of Germans who are Afraid of Specific Societal Crises

The eight society-related problems selected here do indeed arouse fears and concerns in a large proportion of people (Fig. 1). The highest fear level is found for wars and armed conflicts ( M  = 4.12; SD  = 0.94; SE  = 0.030), which cause “strong” or “very strong” concerns among 76% of all respondents. Growing social inequality and concerns about poverty also cause fear, with 72% being worried about this issue ( M  = 3.95; SD  = 0.99; SE  = 0.032). Two-thirds (67%) say they are concerned about the rise of right-wing extremism ( M  = 3.95; SD  = 1.25; SE  = 0.040). Crime and terror ( M  = 3.81; SD  = 1.06; SE  = 0.034), as well as immigration and flight to Germany ( M  = 3.73; SD  = 1.17; SE  = 0.038), also worry roughly six in ten Germans (64% and 61%, respectively). Climate change has a lower average score ( M  = 3.35; SD  = 1.21; SE  = 0.039) and is a concern for about half of the respondents (49%). The issues causing the least fear are digitalization and AI ( M  = 3.02; SD  = 1.15; SE  = 0.037) and pandemics and novel pathogens ( M  = 2.82; SD  = 1.13; SE  = 0.036), with a third (33%) and a quarter (26%) of respondents reporting “strong” or “very strong” fear.
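
A minimal sketch of how these descriptive figures (M, SD, SE and the share of “strong”/“very strong” responses) can be computed, continuing the illustrative Python example above:

```python
# Descriptives per fear item: mean, SD, SE of the mean, and share of responses
# 4 ("strong") or 5 ("very strong"); df and fear_items as sketched above.
import numpy as np

for item in fear_items:
    x = df[item].dropna()
    m, sd = x.mean(), x.std(ddof=1)
    se = sd / np.sqrt(len(x))          # standard error of the mean
    high = (x >= 4).mean()             # share reporting "strong"/"very strong" fear
    print(f"{item}: M={m:.2f}  SD={sd:.2f}  SE={se:.3f}  high={high:.0%}")
```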

Society-related Fears’ Relation to People’s Political Positions

All society-related fears are significantly associated with respondents’ ideological position on the LRS scale, but to varying degrees (Fig. 2). People on the left of the ideological spectrum are significantly more worried about the rise of right-wing extremist movements (η²=0.22; p  < .001) and about the consequences of climate change (η²=0.12; p  < .001). Conversely, people on the political right are far more concerned about migration (η²=0.20; p  < .001) as well as crime and terror (η²=0.07; p  < .001). By comparison, the differences in the other society-related fears are less pronounced: wars and conflicts cause slightly more concern among people on the left (η²=0.03; p  < .001), as do increasing poverty and social inequality (η²=0.02; p  < .001). People on the right are slightly more afraid of the consequences of digitalization and AI (η²=0.01; p  < .04) and slightly less afraid of pandemics (η²=0.01; p  = .04). Overall, most people experience certain societal developments as threatening, but, as assumed, ideological standpoints play a crucial role in determining which developments and which crises trigger the most fear.

Figure 2: Mean scores for eight society-related fears by individuals’ ideological left-right self-placement. Error bars show the 95% confidence interval of the mean. Self-reported scores of 1–4 were classified as “left”, scores of 5 and 6 as “center”, and scores of 7–10 as “right”. Data represent the German population aged 18+ with Internet access ( N  = 1,001)
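
A sketch of the group comparison behind Figure 2, assuming the DataFrame from the earlier sketches and an illustrative “lrs” column holding the 1–10 left-right self-placement; eta squared is computed by hand as the between-group share of the total sum of squares:

```python
# One-way ANOVA per fear across three ideological groups, with eta squared as
# effect size; df, fear_items and the "lrs" column are illustrative as above.
import pandas as pd
from scipy import stats

# Scores 1-4 = "left", 5-6 = "center", 7-10 = "right" (matching Fig. 2)
df["ideology"] = pd.cut(df["lrs"], bins=[0, 4, 6, 10],
                        labels=["left", "center", "right"])

def eta_squared(y, g):
    """Between-group sum of squares divided by total sum of squares."""
    d = pd.DataFrame({"y": y, "g": g}).dropna()
    grand = d["y"].mean()
    ss_total = ((d["y"] - grand) ** 2).sum()
    ss_between = d.groupby("g")["y"].apply(
        lambda s: len(s) * (s.mean() - grand) ** 2).sum()
    return ss_between / ss_total

for item in fear_items:
    groups = [s.dropna() for _, s in df.groupby("ideology")[item]]
    f_stat, p = stats.f_oneway(*groups)
    e2 = eta_squared(df[item], df["ideology"])
    print(f"{item}: F={f_stat:.2f}  p={p:.3f}  eta2={e2:.2f}")
```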

Associations of Society-related Fears with Personal Mental Health

A first set of multiple linear regression models examines whether the sum of all society-related worries reported by a respondent is associated with mental health complaints (Table  1 ). Models 1a and 1b both show that personal anxiety and depression scores are significantly associated with society-related worries, even when controls are included ( b  = 0.28; β = 0.24; p  < .001). A 1-unit increase in society-related fears is associated with a 0.28-point increase on the 4-point PHQ scale. Model 1b also suggests that age ( b  = -0.01; β = -0.20; p  < .001), income ( b  = -0.04; β = -0.14; p  < .001) and being in a relationship ( b  = -0.16; β = -0.10; p  < .01) are associated with a lower risk of personal anxiety and depressive tendencies. Migrants report slightly more mental health problems than respondents without an immigration background ( b  = 0.21; β = 0.09; p  < .01).

In addition, Model 1c includes a quadratic term for society-related fear, which is significant and points to a non-linear relationship ( b  = 0.10; β = 0.61; p  = .01). In fact, the level of anxiety and depression increases at an accelerating rate the more societal developments a person perceives as threatening, as illustrated in Fig.  3 .

Figure 3: Estimated effects of society-related fears on mental health (PHQ-4) scores. The figure shows the combination of the main effect for society-related fears and the squared term, indicating a non-linear relationship. Data represent the German population aged 18+ with Internet access ( N  = 1,001)
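
Model 1c can be sketched by adding a squared fear term to the formula from the earlier illustration (again, names are hypothetical and the original analysis was run in SPSS):

```python
# Model 1c: quadratic term for the overall fear score (illustrative names).
import statsmodels.formula.api as smf

model_1c = smf.ols(
    "phq4 ~ fear_mean + I(fear_mean ** 2) + age + female + education"
    " + income + partner + immigrant + religious + east",
    data=df,
).fit()
# A significant positive coefficient on the squared term means PHQ-4 scores
# rise faster at higher fear levels, as in Fig. 3.
print(model_1c.params["I(fear_mean ** 2)"])
```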

Further regression analyses explore whether specific societal concerns are associated with mental health problems (Table  2 ). These analyses show significant relations for some variables (Model 2a). Zero-order correlations (i.e., without controls) indicate that concerns about poverty and inequality ( r  = .18), digitalization and AI ( r  = .16), pandemics ( r  = .16), war and armed conflicts ( r  = .12), right-wing extremism ( r  = .07) and climate change ( r  = .07) correlate significantly with increased mental health problems. A multiple regression model (which controls for correlations between the society-related fears) points to three significant effects: worries related to pandemics ( b  = 0.09; β = 0.14; p  < .001), poverty and growing social inequality ( b  = 0.07; β = 0.10; p  < .01), and digitalization and AI ( b  = 0.05; β = 0.08; p  = .03) are related to higher anxiety and depression scores.
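
The contrast between zero-order correlations and the multiple model can be reproduced along these lines, continuing the illustrative example:

```python
# Zero-order (bivariate) correlations of each single fear with PHQ-4, for
# comparison with the coefficients from the multiple model (Model 2a).
from scipy import stats

for item in fear_items:
    sub = df[[item, "phq4"]].dropna()
    r, p = stats.pearsonr(sub[item], sub["phq4"])
    print(f"{item}: r={r:.2f}  p={p:.3f}")
```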

A model including only individuals who place themselves on the left side of the LRS scale (Model 2b) shows that concerns about wars and military conflicts ( b  = 0.14; β = 0.14; p  = .03), poverty and inequality ( b  = 0.14; β = 0.15; p  = .01), pandemics ( b  = 0.11; β = 0.16; p  = .02) and digitalization ( b  = 0.10; β = 0.14; p  = .03) are associated with poorer mental health. In contrast, a similar model for individuals who place themselves in the political center indicates that only the fear of pandemics is associated with poor mental health ( b  = 0.07; β = 0.12; p =  .04). Among right-leaning individuals (Model 2d), only worries about digitalization and AI are significantly associated with higher anxiety and depression scores ( b  = 0.15; β = 0.26; p  < .01).
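
Finally, the subgroup models (2b–2d) correspond to re-fitting the fear-specific regression within each ideological group, sketched here with the same illustrative names:

```python
# Models 2b-2d: the fear-specific regression re-fitted within each ideological
# group; formula and names mirror the illustrative sketches above.
import statsmodels.formula.api as smf

formula = ("phq4 ~ " + " + ".join(fear_items) +
           " + age + female + education + income"
           " + partner + immigrant + religious + east")

for label, sub in df.groupby("ideology"):
    fit = smf.ols(formula, data=sub).fit()
    print(label, fit.params[fear_items].round(2).to_dict())
```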

Discussion

Scientific accounts (Adolph et al., 2016; Kovács, 2023) and common sense both suggest that mental health problems, such as personal anxiety or depression, can increase in times of multiple crises. Building on theorizing on fear in (post-)modern society (Bauman, 2006; Bude, 2014; Furedi, 1997), this article serves as a pilot study exploring empirical relationships between society-related concerns and personal mental health. Results show that Germans were most concerned about societal developments and crises such as wars, social inequalities, right-wing extremism, crime and terror, and immigration. At the time of the survey, a majority of the respondents – between 61% and 76% – regarded these developments as highly worrying. However, societal developments are perceived as worrisome to varying degrees, depending on the individual’s ideological position. Left-leaning individuals are concerned about poverty, a rightward shift in society, and climate change, while right-leaning individuals rather perceive immigration and crime as threats. Most importantly, however, the findings reveal that society-related fears are associated with mental health issues. More precisely, the regression models show a non-linear relationship: mental health issues increase at an accelerating rate the more frightened an individual is by the current societal crises. Hence, particularly in times of “polycrisis” (Lawrence et al., 2024), societal fears become a relevant and detrimental factor for human wellbeing.

The results are straightforward regarding the finding that people who are more worried about societal trends and conditions report poorer mental health. They are less clear when it comes to single societal developments, which do not all seem to have the same negative effect on mental well-being. Instead, some fears do not correlate with personal mental health at all, while others are only weakly correlated. Generally, this aligns with the notion of an “intensity continuum” of anxiety (Adolph et al., 2016). Building on this idea, it can be postulated that some fears relating to society and the wider living environment may be perceived as less stressful or less intense than fears relating to the immediate personal environment. In addition, some scholars argue that society-related worries, such as immigration-related fears, could translate into personal anger rather than personal fear (Rico, 2024). In any case, there is some plasticity in individual reactions to crises, and not every threatening societal development must necessarily translate into a heightened personal experience of anxiety or a reduced level of happiness. However, although society-related concerns may not be experienced as intensely as personal fears, they often persist for longer periods. Concerns about the consequences of climate change, for instance, are likely to persist not just for a few months but for years and decades. These worries probably lie in the background rather than the foreground of individual experience. It would thus be valuable to gain a more nuanced understanding of the experiential characteristics that differentiate a rather distal and abstract society-related anxiety from a more proximal and concrete anxiety related to the personal level.

Concerning particular fears, it seems that the societal developments that trigger anxieties and potentially impair mental health depend strongly on ideology (Nussbaum, 2018). The findings presented here do not support previous studies showing that ecological concerns generally translate into lower levels of mental health (Hajek & König, 2023; Heinzel et al., 2023). Nor do they support the assumption that all societal developments are perceived as more threatening by people on the conservative side of the political landscape (Jost et al., 2007). By analyzing ideological orientations, this article rather supports the notion that political worldviews shape society-related concerns. For instance, fears about war and poverty are more likely to be associated with poorer mental health among left-leaning individuals, but not among individuals who place themselves in the center or on the right of the ideological spectrum. Fear related to the diffusion of AI technologies is also associated with reduced mental wellbeing among left- and right-leaning individuals, but not among those in the political center. Although the present study could not reveal the mechanisms behind this effect, it is plausible that concerns about privacy violations and surveillance play a role (for a discussion see Zuboff, 2022), troubling particularly individuals with more radical political views at the ideological poles.

In terms of sociodemographic variables, the present study lends support to previous findings that individuals from lower socioeconomic strata exhibit heightened concern about societal developments (Adolph et al., 2016) and that individuals in a relationship display a reduced tendency to worry (Pieh et al., 2020). However, the analyses revealed no gender effects. While females have been found to express more fears during the pandemic (Metin et al., 2022), the present study suggests that they do not generally worry more about societal developments than males.

This study has strengths and limitations. I consider a strength the simultaneous analysis of a number of societal fears (rather than a focus on a single fear) in a coherent approach based on representative data, which allows for comparisons and generalizations. Only a few studies have so far provided similar data and analyses (as an exception: Adolph et al., 2016). In addition, the inclusion of ideology proved worthwhile. However, it is a limitation that the LRS scale captures only one dimension of political ideology and that the eight societal fears analyzed here may not be comprehensive; in particular, different crises are likely to manifest in the future, which could give rise to different worries and concerns. Above all, the cross-sectional design is a limiting factor, as it can only establish robust correlations. My argument favors the interpretation that societal crises and related fears reduce mental health and well-being. However, the design used here cannot rule out the reverse direction: that people with mental health problems view societal conditions more negatively and may react more anxiously in times of crisis. Future studies could address some of these limitations, for example by using longitudinal research designs or by including multidimensional measures of political orientations and ideologies.

Building on Bauman’s ( 2006 ) concept of “liquid fear”, one could argue that fear in contemporary Western societies is ephemeral, virtually free-floating and not tied to a specific threat. Hence, fear could move from one current problem to the next. If one takes this argument seriously, questions about fear-inducing social developments may capture only a snapshot in time, which could look different just a few weeks later. Nevertheless, the social crises examined here are by no means only short-term issues: the consequences of climate change, the conflicts between rich and poor, and the disruptions caused by AI will continue to preoccupy humanity for years to come. However, it is unclear whether people will get used to these matters and, at some point, start taking these uncertainties for granted. Future studies that empirically examine the stability or volatility of society-related fears over time are therefore highly relevant.

In conclusion, the analysis presented here makes clear that the relationship between societal crises and related fears, on the one hand, and mental health and well-being, on the other, deserves to be studied more closely than it has been. Society-related fears may be important predictors of health, especially in times of great uncertainty and crises of planetary scale. A social science perspective, which no longer sees mental health and well-being solely as an individual characteristic, but rather as embedded in a societal, social and communicative context (e.g., Heins, 2021 ; Hirtenlehner, 2019 ; Romer et al., 2003 ; Williamson et al., 2019 ), could be particularly valuable in this regard.

Abbasi, K., Ali, P., Barbour, V., Benfield, T., Bibbins-Domingo, K., Hancocks, S., et al. (2023). Time to treat the climate and nature crisis as one indivisible global health emergency. The Lancet , 402 (10413), 1603–1606.

Adolph, D., Schneider, S., & Margraf, J. (2016). German anxiety barometer – clinical and everyday-life anxieties in the General Population. Frontiers in Psychology , 7 , 01344.

Adzrago, D., Walker, T. J., & Williams, F. (2024). Reliability and validity of the patient health questionnaire-4 scale and its subscales of depression and anxiety among US adults based on nativity. BMC Psychiatry , 24 (1), 213.

Altheide, D. L. (2002). Creating fear. News and the construction of Crisis . Aldine De Gruyter.

Bauman, Z. (2006). Liquid fear . Polity.

Blanc, E. (2023). The EU in motion through emotions: Fear and migration policy in the Euro-Mediterranean context. Mediterranean Politics. Online First, 16.10.2023. https://doi.org/10.1080/13629395.2023.2265258 .

Bloom, N. (2015). Fear of immigration: how has it changed over the last 20 years? URL: https://www.weforum.org/agenda/2015/12/fear-of-immigration-how-has-it-changed-over-the-last-20-years/ (06.08.2024).

Borisch, B. (2023). Should we still talk about crisis? Journal of Public Health Policy , 44 (2), 332–335.

Brand, U. (2009). Die multiple Krise . Heinrich-Böll-Stiftung.

Brenig, M., & Proeger, T. (2018). Putting a price tag on security: Subjective well-being and willingness-to-pay for crime reduction in Europe. Journal of Happiness Studies , 19 (1), 145–166.

Bude, H. (2014). Gesellschaft der Angst . Hamburger Edition.

Bug, M., Kroh, M., & Meier, K. (2015). Regionale kriminalitätsbelastung und kriminalitätsfurcht: Befunde der WISIND-Studie. DIW Wochenbericht , 82 (12), 259–269.

Burger, M., Hendriks, M., & Ianchovichina, E. (2023). Economic crises, Subjective Well-Being, and vote switching: The case of Brazil’s 2018 Presidential Election. Journal of Happiness Studies , 24 , 2831–2853.

Charlson, F., van Ommeren, M., Flaxman, A., Cornett, J., Whiteford, H., & Saxena, S. (2019). New WHO prevalence estimates of mental disorders in conflict settings: A systematic review and meta-analysis. Lancet , 394 (10194), 240–248.

Delhey, J., & Dragolov, G. (2014). Why Inequality makes Europeans less happy: The role of distrust, status anxiety, and Perceived Conflict. European Sociological Review , 30 (2), 151–165.

Easterlin, R. A., & O’Connor, K. J. (2023). Three years of COVID-19 and life satisfaction in Europe: A macro view. Proceedings of the National Academy of Sciences of the United States of America , 120 (19), e2300717120.

Fraser, T., & Üngör, M. (2019). Migration Fears, Policy Uncertainty and Economic Activity. University of Otago Economics Discussion Papers , No. 1907.

Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change , 114 , 254–280.

Furedi, F. (1997). Culture of fear: Risk-taking and the morality of low expectation . Cassell.

Gottschick, C., Diexer, S., Massag, J., Klee, B., Broda, A., Purschke, O., Binder, M., Sedding, D., Frese, T., Girndt, M., Hoell, J. I., Michl, P., Gekle, M., & Mikolajczyk, R. (2023). Mental health in Germany in the first weeks of the Russo-Ukrainian war. BJPsych open , 9 (3), e66.

Guerra, O., & Eboreime, E. (2021). The impact of Economic recessions on Depression, anxiety, and trauma-related disorders and illness outcomes – A scoping review. Behavioral Sciences , 11 (9), 119.

Gurinskaya, A., Nalla, M. K., & Polyakova, E. (2024). Does fear of migrant crime predict xenophobia: Evidence from three Russian cities. European Journal of Criminology , 21 (1), 31–51.

Guzelian, C. P. (2004). Liability and fear. Ohio State Law Journal , 65 (4), 713–851.

Hajek, A., & König, H. H. (2022). Political party affinity and fear of conventional and nuclear war in Germany. Psychiatry International, 3 (3), 212–220.

Hajek, A., & König, H. H. (2023). Climate Anxiety and Mental Health in Germany. Climate , 11 , 158.

Hajek, A., Kretzler, B., & König, H. H. (2023a). Fear of war in Germany: An observational study. Heliyon , 9 (11), e21784.

Hajek, A., Kretzler, B., & König, H. H. (2023b). Fear of war and mental health in Germany. Social Psychiatry and Psychiatric Epidemiology , 58 (7), 1049–1054.

Heins, V. M. (2021). The plasticity of our fears: Affective politics in the European Migration Crisis. Society , 58 (6), 500–506.

Heinzel, S., Tschorn, M., Schulte-Hutner, M., Schäfer, F., Reese, G., Pohle, C., Peter, F., Neuber, M., Liu, S., Keller, J., Eichinger, M., & Bechtoldt, M. (2023). Anxiety in response to the climate and environmental crises: Validation of the Hogg Eco-anxiety Scale in Germany. Frontiers in Psychology , 14 , 1239425.

Hinks, T. (2024). Artificial Intelligence perceptions and life satisfaction. Journal of Happiness Studies , 25 , 5.

Hirtenlehner, H. (2006). Kriminalitätsfurcht – Ausdruck generalisierter Ängste und schwindender Gewissheiten? Untersuchung zur empirischen Bewährung der Generalisierungsthese in einer österreichischen Kommune. Kölner Zeitschrift für Soziologie und Sozialpsychologie , 58 , 307–331.

Hirtenlehner, H. (2019). Gefährlich sind immer die anderen! Migrationspanik, Abstiegsängste und Unordnungswahrnehmungen als Quelle der Furcht vor importierter Kriminalität. Monatsschrift für Kriminologie und Strafrechtsreform , 102 (4), 262–281.

Hodapp, B., & Zwingmann, C. (2019). Religiosity/spirituality and mental health: A meta-analysis of studies from the German-speaking area. Journal of Religion & Health , 58 (6), 1970–1998.

Infratest dimap (2024). DeutschlandTrend. January 2024. URL: https://www.tagesschau.de/inland/deutschlandtrend/deutschlandtrend-pdf-136.pdf (07.08.2024).

Ipsos (2024). What worries the world? July 2024 . Ipsos.

Jetten, J., Mols, F., & Steffens, N. K. (2021). Prosperous but fearful of falling: The Wealth Paradox, collective angst, and opposition to Immigration. Personality and Social Psychology Bulletin , 47 (5), 766–780.

Jost, J. T., Napier, J. L., Thorisdottir, H., Gosling, S. D., Palfai, T. P., & Ostafin, B. (2007). Are needs to manage uncertainty and threat Associated with Political Conservatism or Ideological Extremity? Personality and Social Psychology Bulletin , 33 (7), 989–1007.

Klingemann, H. D. (1972). Testing the Left-Right Continuum on a sample of German voters. Comparative Political Studies , 5 (1), 93–106.

Knutsen, O. (1998). Europeans move towards the center: A comparative longitudinal study of left-right self-placement in Western Europe. International Journal of Public Opinion Research , 10 (4), 292–316.

Kovács, B. (2023). Documenting the Rise of Anxiety in the United States across Space and Time by Using Text Analysis of Online Review Data. Socius , 9.

Kroenke, K., Spitzer, R. L., Williams, J. B., & Löwe, B. (2009). An ultra-brief screening scale for anxiety and depression: The PHQ–4. Psychosomatics, 50 (6), 613–621.

Lawrence, M., Homer-Dixon, T., Janzwood, S., Rockstöm, J., Renn, O., & Donges, J. F. (2024). Global polycrisis: The causal mechanisms of crisis entanglement. Global Sustainability , 7 , e6.

Lengfeld, H., & Hirschle, J. (2009). Die Angst der Mittelschicht vor dem sozialen Abstieg. Eine Längsschnittanalyse 1984–2007. Zeitschrift für Soziologie , 38 , 379–398.

Li, J., & Huang, J. (2020). Dimensions of artificial intelligence anxiety based on the integrated fear acquisition theory. Technology in Society , 63 , 101410.

Liang, Y., & Lee, S. A. (2017). Fear of Autonomous Robots and Artificial Intelligence: Evidence from National Representative Data with Probability Sampling. International Journal of Social Robotics , 9 (3), 379–384.

Löwe, B., Wahl, I., Rose, M., Spitzer, C., Glaesmer, H., Wingenfeld, K., Schneider, A., & Brähler, E. (2010). A 4-item measure of depression and anxiety: Validation and standardization of the Patient Health Questionnaire-4 (PHQ-4) in the general population. Journal of Affective Disorders , 122 (1–2), 86–95.

Lübke, C. (2019). Leben wir in einer Angstgesellschaft? Die Verbreitung von persönlichen und gesellschaftsbezogenen Sorgen in Deutschland. In C. Lübke & J. Delhey (Eds.), Diagnose Angstgesellschaft? Was wir wirklich über die Gefühlslage der Menschen wissen (pp. 29–58). transcript.

Manning, M., Fleming, C. M., Pham, H. T., & Wong, G. T. W. (2022). What matters more, perceived or real crime? Social Indicators Research , 163 (3), 1221–1248.

Meinlschmidt, G., Stalujanis, E., Grisar, L., Borrmann, M., & Tegethoff, M. (2023). Anticipated fear and anxiety of Automated Driving systems: Estimating the prevalence in a national representative survey. International Journal of Clinical and Health Psychology , 23 (3), 100371.

Metin, A., Erbiçer, E. S., Şen, S., & Çetinkaya, A. (2022). Gender and COVID-19 related fear and anxiety: A meta-analysis. Journal of Affective Disorders , 310 , 384–395.

Nussbaum, M. C. (2018). The monarchy of fear: A philosopher looks at our political crisis . Oxford University Press.

Pickett, K. E., & Wilkinson, R. G. (2015). Income inequality and health: A causal review. Social Science & Medicine , 128 , 316–326.

Pieh, C., O’Rourke, T., Budimir, S., & Probst, T. (2020). Relationship quality and mental health during COVID-19 lockdown. Plos One , 15 (9), e0238906.

Pihkala, P. (2020). Anxiety and the Ecological Crisis: An analysis of Eco-anxiety and Climate anxiety. Sustainability , 12 , 7836.

Pisoiu, D., & Ahmed, R. (2016). Capitalizing on fear: The rise of right-wing populist movements in Western Europe. In IFSH (Ed.), OSCE Yearbook 2015 (pp. 165–176). Nomos.

Powdthavee, N. (2005). Unhappiness and crime: Evidence from South Africa. Economica , 72 (287), 531–547.

R + V Infocenter. (2024). Pressemeldung: Die Ängste der Deutschen. https://www.ruv.de/dam/jcr:72f65ea6-e94b-46a7-8086-392e654b8d80/ruv-studie-aengste-politik-gesellschaft.pdf (06.08.2024).

Renström, E. A., & Bäck H. (2021). Emotions during the Covid-19 pandemic: Fear, anxiety, and anger as mediators between threats and policy support and political actions. Journal of Applied Social Psychology, 51 , 861–877.

Rico, G. (2024). Ideological identification, type of threat, and differences in how anger and fear relate to anti-immigrant and populist attitudes. American Behavioral Scientist , Online First , 25032024. https://doi.org/10.1177/00027642241240344

Romer, D., Jamieson, K. H., & Aday, S. (2003). Television News and the cultivation of fear of crime. Journal of Communication , 53 (1), 88–104.

Salmela, M., & von Scheve, C. (2017). Emotional roots of right-wing political populism. Social Science Information , 56 (4), 567–595.

Schöneck, N. M., Mau, S., & Schupp, J. (2011). Gefühlte Unsicherheit. Deprivationsängste und Abstiegssorgen der Bevölkerung in Deutschland. SOEP Papers on Multidisciplinary Panel Data Research , 428. Berlin: DIW.

Sehrawat, V. (2017). Autonomous weapon system: Law of armed conflict (LOAC) and other legal challenges. Computer Law & Security Review , 33 , 38–56.

Şimşir, Z., Koç, H., Seki, T., & Griffiths, M. D. (2022). The relationship between fear of COVID-19 and mental health problems: A meta-analysis. Death Studies , 46 (3), 515–523.

Steg, J. (2019). Krisen Des Kapitalismus . Campus.

Thomson, E. E., & Roach, S. P. (2023). The relationships among nature connectedness, climate anxiety, climate action, climate knowledge, and mental health. Frontiers in Psychology , 14 , 1241400.

Tudor, A. (2003). A (macro) sociology of fear? The Sociological Review , 51 (2), 238–256.

Vargová, L., Jozefiaková, B., Lačný, M., & Adamkovič, M. (2024). War-related stress scale. BMC Psychology , 12 , 208.

Walby, S. (2015). Crisis . Polity.

Walther, L., Rayes, D., Amann, J., Flick, U., Ta, T. M. T., Hahn, E., & Bajbouj, M. (2021). Mental Health and Integration: A qualitative study on the struggles of recently arrived refugees in Germany. Frontiers in Public Health , 9 , 576481.

WDR (2024). Vertreibungspläne der AfD: Angst, Schock und die Forderung nach Protest. URL: https://www1.wdr.de/nachrichten/afd-vertreibung-correctiv-protest-100.html (06.08.2024).

Williamson, H., Fay, S., & Miles-Johnson, T. (2019). Fear of terrorism: Media exposure and subjective fear of attack. Global Crime , 20 (1), 1–25.

Wolter, K., Chowdhury, S., & Kelly, J. (2009). Design, Conduct, and Analysis of Random-Digit Dialing Surveys. In C.R. Rao (Ed.), Handbook of Statistics. Volume 29 Part A (pp. 125–154). Amsterdam: Elsevier.

Wullenkord, M. C., Tröger, J., Hamann, K. R. S., Loy, L. S., & Reese, G. (2021). Anxiety and climate change: A validation of the climate anxiety scale in a german-speaking quota sample and an investigation of psychological correlates. Climatic Change , 168 , 20.

Yendell, A., & Pickel, G. (2019). Islamophobia and anti-Muslim feeling in Saxony – theoretical approaches and empirical findings based on population surveys. Journal of Contemporary European Studies , 28 (1), 85–99.

Zhao, Y., Yin, D., Wang, L., & Yu, Y. (2024). The rise of artificial intelligence, the fall of human wellbeing? International Journal of Social Welfare , 33 (1), 75–105.

Zuboff, S. (2022). Surveillance capitalism or democracy? The Death Match of Institutional orders and the politics of knowledge in our information civilization. Organization Theory , 3 (3).

Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and affiliations

Justus-Liebig-University Giessen, Institute of Sport Science, Kugelberg 62, 35394, Giessen, Germany

Michael Mutz

Corresponding author

Correspondence to Michael Mutz.

Ethics declarations

Conflict of interest

The author declares no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Mutz, M. Society-related Fears and Personal Mental Health. Applied Research Quality Life (2024). https://doi.org/10.1007/s11482-024-10367-0

Received: 10 May 2024

Accepted: 19 August 2024

Published: 31 August 2024

DOI: https://doi.org/10.1007/s11482-024-10367-0
