Brian M. Belcher, Katherine E. Rasmussen, Matthew R. Kemshaw, Deborah A. Zornes, Defining and assessing research quality in a transdisciplinary context, Research Evaluation, Volume 25, Issue 1, January 2016, Pages 1–17, https://doi.org/10.1093/reseval/rvv025


Research increasingly seeks both to generate knowledge and to contribute to real-world solutions, with strong emphasis on context and social engagement. As boundaries between disciplines are crossed, and as research engages more with stakeholders in complex systems, traditional academic definitions and criteria of research quality are no longer sufficient; there is a need for a parallel evolution of principles and criteria to define and evaluate research quality in a transdisciplinary research (TDR) context. We conducted a systematic review to help answer the question: What are appropriate principles and criteria for defining and assessing TDR quality? Articles were selected and reviewed seeking: arguments for or against expanding definitions of research quality, purposes for research quality evaluation, proposed principles of research quality, proposed criteria for research quality assessment, proposed indicators and measures of research quality, and proposed processes for evaluating TDR. We used the information from the review and our own experience in two research organizations that employ TDR approaches to develop a prototype TDR quality assessment framework, organized as an evaluation rubric. We provide an overview of the relevant literature and summarize the main aspects of TDR quality identified there. Four main principles emerge: relevance, including social significance and applicability; credibility, including criteria of integration and reflexivity, added to traditional criteria of scientific rigor; legitimacy, including criteria of inclusion and fair representation of stakeholder interests; and effectiveness, with criteria that assess actual or potential contributions to problem solving and social change.

Contemporary research in the social and environmental realms places strong emphasis on achieving ‘impact’. Research programs and projects aim to generate new knowledge but also to promote and facilitate the use of that knowledge to enable change, solve problems, and support innovation ( Clark and Dickson 2003 ). Reductionist and purely disciplinary approaches are being augmented or replaced with holistic approaches that recognize the complex nature of problems and that actively engage within complex systems to contribute to change ‘on the ground’ ( Gibbons et al. 1994 ; Nowotny, Scott and Gibbons 2001 , Nowotny, Scott and Gibbons 2003 ; Klein 2006 ; Hemlin and Rasmussen 2006 ; Chataway, Smith and Wield 2007 ; Erno-Kjolhede and Hansson 2011 ). Emerging fields such as sustainability science have developed out of a need to address complex and urgent real-world problems ( Komiyama and Takeuchi 2006 ). These approaches are inherently applied and transdisciplinary, with explicit goals to contribute to real-world solutions and strong emphasis on context and social engagement ( Kates 2000 ).

While there is an ongoing conceptual and theoretical debate about the nature of the relationship between science and society (e.g. Hessels 2008 ), we take a more practical starting point based on the authors’ experience in two research organizations. The first author has been involved with the Center for International Forestry Research (CIFOR) for almost 20 years. CIFOR, as part of the Consultative Group on International Agricultural Research (CGIAR), began a major transformation in 2010 that shifted the emphasis from a primary focus on delivering high-quality science to a focus on ‘…producing, assembling and delivering, in collaboration with research and development partners, research outputs that are international public goods which will contribute to the solution of significant development problems that have been identified and prioritized with the collaboration of developing countries.’ ( CGIAR 2011 ). It was always intended that CGIAR research would be relevant to priority development and conservation issues, with emphasis on high-quality scientific outputs. The new approach puts much stronger emphasis on welfare and environmental results; research centers, programs, and individual scientists now assume shared responsibility for achieving development outcomes. This requires new ways of working, with more and different kinds of partnerships and more deliberate and strategic engagement in social systems.

Royal Roads University (RRU), the home institute of all four authors, is a relatively new (created in 1995) public university in Canada. It is deliberately interdisciplinary by design, with just two faculties (Faculty of Social and Applied Science; Faculty of Management) and strong emphasis on problem-oriented research. Faculty and student research is typically ‘applied’ in the Organization for Economic Co-operation and Development (2012) sense of ‘original investigation undertaken in order to acquire new knowledge … directed primarily towards a specific practical aim or objective’.

An increasing amount of the research done within both of these organizations can be classified as transdisciplinary research (TDR). TDR crosses disciplinary and institutional boundaries, is context specific, and is problem oriented ( Klein 2006 ; Carew and Wickson 2010 ). It combines and blends methodologies from different theoretical paradigms, includes a diversity of both academic and lay actors, and is conducted with a range of research goals, organizational forms, and outputs ( Klein 2006 ; Boix-Mansilla 2006a ; Erno-Kjolhede and Hansson 2011 ). The problem-oriented nature of TDR and the importance placed on societal relevance and engagement are broadly accepted as defining characteristics of TDR ( Carew and Wickson 2010 ).

The experience developing and using TDR approaches at CIFOR and RRU highlights the need for a parallel evolution of principles and criteria for evaluating research quality in a TDR context. Scientists appreciate and often welcome the need and the opportunity to expand the reach of their research, to contribute more effectively to change processes. At the same time, they feel the pressure of added expectations and are looking for guidance.

In any activity, we need principles, guidelines, criteria, or benchmarks that can be used to design the activity, assess its potential, and evaluate its progress and accomplishments. Effective research quality criteria are necessary to guide the funding, management, ongoing development, and advancement of research methods, projects, and programs. The lack of quality criteria to guide and assess research design and performance is seen as hindering the development of transdisciplinary approaches ( Bergmann et al. 2005 ; Feller 2006 ; Chataway, Smith and Wield 2007 ; Ozga 2008 ; Carew and Wickson 2010 ; Jahn and Keil 2015 ). Appropriate quality evaluation is essential to ensure that research receives support and funding, and to guide and train researchers and managers to realize high-quality research ( Boix-Mansilla 2006a ; Klein 2008 ; Aagaard-Hansen and Svedin 2009 ; Carew and Wickson 2010 ).

Traditional disciplinary research is built on well-established methodological and epistemological principles and practices. Within disciplinary research, quality has been defined narrowly, with the primary criteria being scientific excellence and scientific relevance ( Feller 2006 ; Chataway, Smith and Wield 2007 ; Erno-Kjolhede and Hansson 2011 ). Disciplines have well-established (often implicit) criteria and processes for the evaluation of quality in research design ( Erno-Kjolhede and Hansson 2011 ). TDR that is highly context specific, problem oriented, and includes nonacademic societal actors in the research process is challenging to evaluate ( Wickson, Carew and Russell 2006 ; Aagaard-Hansen and Svedin 2009 ; Andrén 2010 ; Carew and Wickson 2010 ; Huutoniemi 2010 ). There is no one definition or understanding of what constitutes quality, nor a set guide for how to do TDR ( Lincoln 1995 ; Morrow 2005 ; Oberg 2008 ; Andrén 2010 ; Huutoniemi 2010 ). When epistemologies and methods from more than one discipline are used, disciplinary criteria may be insufficient and criteria from more than one discipline may be contradictory; cultural conflicts can arise as a range of actors use different terminology for the same concepts or the same terminology for different concepts ( Chataway, Smith and Wield 2007 ; Oberg 2008 ).

Current research evaluation approaches as applied to individual researchers, programs, and research units are still based primarily on measures of academic outputs (publications and the prestige of the publishing journal), citations, and peer assessment ( Boix-Mansilla 2006a ; Feller 2006 ; Erno-Kjolhede and Hansson 2011 ). While these indicators of research quality remain relevant, additional criteria are needed to address the innovative approaches and the diversity of actors, outputs, outcomes, and long-term social impacts of TDR. It can be difficult to find appropriate outlets for TDR publications simply because the research does not meet the expectations of traditional discipline-oriented journals. Moreover, a wider range of inputs and of outputs means that TDR may result in fewer academic outputs. This has negative implications for transdisciplinary researchers, whose performance appraisals and long-term career progression are largely governed by traditional publication and citation-based metrics of evaluation. Research managers, peer reviewers, academic committees, and granting agencies all struggle with how to evaluate and how to compare TDR projects ( ex ante or ex post ) in the absence of appropriate criteria to address epistemological and methodological variability. The extent of engagement of stakeholders 1 in the research process will vary by project, from information sharing through to active collaboration ( Brandt et al. 2013) , but at any level, the involvement of stakeholders adds complexity to the conceptualization of quality. We need to know what ‘good research’ is in a transdisciplinary context.

As Tijssen ( 2003 : 93) put it: ‘Clearly, in view of its strategic and policy relevance, developing and producing generally acceptable measures of “research excellence” is one of the chief evaluation challenges of the years to come’. Clear criteria are needed for research quality evaluation to foster excellence while supporting innovation: ‘A principal barrier to a broader uptake of TD research is a lack of clarity on what good quality TD research looks like’ ( Carew and Wickson 2010 : 1154). In the absence of alternatives, many evaluators, including funding bodies, rely on conventional, discipline-specific measures of quality which do not address important aspects of TDR.

There is an emerging literature that reviews, synthesizes, or empirically evaluates knowledge and best practice in research evaluation in a TDR context and that proposes criteria and evaluation approaches ( Defila and Di Giulio 1999 ; Bergmann et al. 2005 ; Wickson, Carew and Russell 2006 ; Klein 2008 ; Carew and Wickson 2010 ; ERIC 2010; de Jong et al. 2011 ; Spaapen and Van Drooge 2011 ). Much of it comes from a few fields, including health care, education, and evaluation; little comes from the natural resource management and sustainability science realms, despite these areas needing guidance. National-scale reviews have begun to recognize the need for broader research evaluation criteria but have had difficulty acting on that need and have made little progress ( Donovan 2008 ; KNAW 2009 ; REF 2011 ; ARC 2012 ; TEC 2012 ). A summary of the national reviews examined in the development of this research is provided in Supplementary Appendix 1 . While there are some published evaluation schemes for TDR and interdisciplinary research (IDR), there is ‘substantial variation in the balance different authors achieve between comprehensiveness and over-prescription’ ( Wickson and Carew 2014 : 256) and still a need to develop standardized quality criteria that are ‘uniquely flexible to provide valid, reliable means to evaluate and compare projects, while not stifling the evolution and responsiveness of the approach’ ( Wickson and Carew 2014 : 256).

There is a need and an opportunity to synthesize current ideas about how to define and assess quality in TDR. To address this, we conducted a systematic review of the literature that discusses the definitions of research quality as well as the suggested principles and criteria for assessing TDR quality. The aim is to identify appropriate principles and criteria for defining and measuring research quality in a transdisciplinary context and to organize those principles and criteria as an evaluation framework.

The review question was: What are appropriate principles, criteria, and indicators for defining and assessing research quality in TDR?

This article presents the method used for the systematic review and our synthesis, followed by key findings. We first present theoretical arguments for why new principles and criteria are needed for TDR, along with associated discussion of evaluation processes. We then present a framework of principles and criteria for TDR quality evaluation, derived from our synthesis of the literature, together with guidance on its application. Finally, we discuss recommended next steps in this research and needs for future research.

2.1 Systematic review

Systematic review is a rigorous, transparent, and replicable methodology that has become widely used to inform evidence-based policy, management, and decision making ( Pullin and Stewart 2006 ; CEE 2010). Systematic reviews follow a detailed protocol with explicit inclusion and exclusion criteria to ensure a repeatable and comprehensive review of the target literature. Review protocols are shared and often published as peer reviewed articles before undertaking the review to invite critique and suggestions. Systematic reviews are most commonly used to synthesize knowledge on an empirical question by collating data and analyses from a series of comparable studies, though methods used in systematic reviews are continually evolving and are increasingly being developed to explore a wider diversity of questions ( Chandler 2014 ). The current study question is theoretical and methodological, not empirical. Nevertheless, with a diverse and diffuse literature on the quality of TDR, a systematic review approach provides a method for a thorough and rigorous review. The protocol is published and available at http://www.cifor.org/online-library/browse/view-publication/publication/4382.html . A schematic diagram of the systematic review process is presented in Fig. 1 .

Figure 1. Search process.

2.2 Search terms

Search terms were designed to identify publications that discuss the evaluation or assessment of quality or excellence 2 of research 3 that is done in a TDR context. Search terms are listed online in Supplementary Appendices 2 and 3 . The search strategy favored sensitivity over specificity to ensure that we captured the relevant information.
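As a rough illustration of what favoring sensitivity over specificity can look like in practice, the sketch below ORs synonyms within each concept block and ANDs the blocks together; the terms shown are hypothetical stand-ins, not the actual strings from Supplementary Appendices 2 and 3.

```python
# Illustrative only: the review's actual search strings are listed in
# Supplementary Appendices 2 and 3; the terms below are hypothetical.
concept_blocks = {
    "approach": ["transdisciplinar*", "interdisciplinar*", "cross-disciplinar*"],
    "object": ["research", "science"],
    "focus": ["quality", "excellence", "evaluat*", "assess*"],
}

def build_query(blocks):
    """OR synonyms within each concept block, then AND the blocks together.

    Broad OR lists raise sensitivity (more relevant hits captured) at the
    cost of specificity (more irrelevant hits to screen out later).
    """
    grouped = ["(" + " OR ".join(terms) + ")" for terms in blocks.values()]
    return " AND ".join(grouped)

print(build_query(concept_blocks))
# (transdisciplinar* OR interdisciplinar* OR cross-disciplinar*) AND
# (research OR science) AND (quality OR excellence OR evaluat* OR assess*)
```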

2.3 Databases searched

ISI Web of Knowledge (WoK) and Scopus were searched between 26 June 2013 and 6 August 2013. The combined searches yielded 15,613 unique citations. Additional searches to update the first searches were carried out in June 2014 and March 2015, for a total of 19,402 titles scanned. Google Scholar (GS) was searched separately by two reviewers during each search period. The first reviewer’s search was done on 2 September 2013 (Search 1) and 3 September 2013 (Search 2), yielding 739 and 745 titles, respectively. The second reviewer’s search was done on 19 November 2013 (Search 1) and 25 November 2013 (Search 2), yielding 769 and 774 titles, respectively. A third search done on 17 March 2015 by one reviewer yielded 98 new titles. Reviewers found high redundancy between the WoK/Scopus searches and the GS searches.
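The merging and de-duplication of hits from the different databases can be pictured with a sketch like the following; the record fields and the matching rule are our assumptions, not the authors' documented workflow.

```python
# Hypothetical sketch of merging database hits into unique citations.
# Field names ("doi", "title") and the matching rule are assumptions.
def normalize_title(title):
    # Case- and punctuation-insensitive key so near-identical records match.
    return "".join(ch for ch in title.lower() if ch.isalnum())

def merge_unique(*batches):
    seen, unique = set(), []
    for batch in batches:  # e.g. wok_hits, scopus_hits, gs_hits
        for record in batch:
            key = record.get("doi") or normalize_title(record["title"])
            if key not in seen:
                seen.add(key)
                unique.append(record)
    return unique

wok = [{"doi": "10.1/x", "title": "Evaluating TDR quality"}]
gs = [{"doi": "10.1/x", "title": "Evaluating TDR quality"},
      {"doi": None, "title": "A different paper"}]
print(len(merge_unique(wok, gs)))  # 2: the shared-DOI record is dropped
```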

2.4 Targeted journal searches

Highly relevant journals, including Research Evaluation, Evaluation and Program Planning, Scientometrics, Research Policy, Futures, American Journal of Evaluation, Evaluation Review, and Evaluation, were comprehensively searched using broader, more inclusive search strings that would have been unmanageable for the main database search.

2.5 Supplementary searches

References in included articles were reviewed to identify additional relevant literature. We also consulted td-net’s ‘Tour d’Horizon of Literature’, a list of important inter- and transdisciplinary publications collected through an invitation to experts in the field to submit publications ( td-net 2014 ). Six additional articles were identified through these supplementary searches.

2.6 Limitations of coverage

The review was limited to English-language published articles and material available through internet searches. There was no systematic way to search the gray (unpublished) literature, but relevant material identified through supplementary searches was included.

2.7 Inclusion of articles

This study sought articles that review, critique, discuss, and/or propose principles, criteria, indicators, and/or measures for the evaluation of quality relevant to TDR. As noted, this yielded a large number of titles. We then selected only those articles with an explicit focus on the meaning of IDR and/or TDR quality and how to achieve, measure or evaluate it. Inclusion and exclusion criteria were developed through an iterative process of trial article screening and discussion within the research team. Through this process, inter-reviewer agreement was tested and strengthened. Inclusion criteria are listed in Tables 1 and 2 .

Table 1. Inclusion criteria for title and abstract screening

Theme | Inclusion criteria
Topic coverage |
Document type |
Geographic | No geographic barriers
Date | No temporal barriers
Discipline/field | Discussion must be relevant to environment, natural resources management, sustainability, livelihoods, or related areas of human–environmental interactions. The discussion need not explicitly reference any of the above subject areas.

Table 2. Inclusion criteria for abstract and full article screening

Theme | Inclusion criteria
Relevance to review objectives (all articles must meet this criterion) | The intention of the article, or part of the article, is to discuss the meaning of research quality and how to measure/evaluate it
Theoretical discussion |
Quality definitions and criteria | Offers an explicit definition or criteria of inter- and/or transdisciplinary research quality
Evaluation process | Suggests approaches to evaluate inter- and/or transdisciplinary research quality (included only if there is relevant discussion of research quality criteria and/or measurement)
Research ‘impact’ | Discusses research outcomes (diffusion, uptake, utilization, impact) as an indicator or consequence of research quality

Article screening was done in parallel by two reviewers in three rounds: (1) title, (2) abstract, and (3) full article. In cases of uncertainty, papers were advanced to the next round. Final decisions on inclusion of contested papers were made by consensus among the four team members.
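A minimal sketch of this decision rule, assuming two parallel reviewers whose uncertainty or disagreement carries a paper forward and a consensus step for contested full-text decisions (all names are illustrative):

```python
# Minimal sketch of the screening rule described above; the names and
# the consensus callable are illustrative assumptions.
from enum import Enum

class Vote(Enum):
    INCLUDE = "include"
    EXCLUDE = "exclude"
    UNSURE = "unsure"

def round_decision(vote_a, vote_b):
    """Title and abstract rounds: only a clear double exclude drops a
    paper; uncertainty or disagreement advances it to the next round."""
    if vote_a == Vote.EXCLUDE and vote_b == Vote.EXCLUDE:
        return "exclude"
    return "advance"

def full_text_decision(vote_a, vote_b, team_consensus):
    """Final round: agreement stands; contested papers go to the
    four-member team for a consensus decision."""
    if vote_a == vote_b and vote_a is not Vote.UNSURE:
        return vote_a.value
    return team_consensus()
```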

2.8 Critical appraisal

In typical systematic reviews, individual articles are appraised to ensure that they are adequate for answering the research question and to assess the methods of each study for susceptibility to bias that could influence the outcome of the review (Petticrew and Roberts 2006). Most papers included in this review are theoretical and methodological papers, not empirical studies. Most do not have explicit methods that can be appraised with existing quality assessment frameworks. Our critical appraisal considered four criteria adapted from Spencer et al. (2003): (1) relevance to the review question, (2) clarity and logic of how information in the paper was generated, (3) significance of the contribution (are new ideas offered?), and (4) generalizability (is the context specified; do the ideas apply in other contexts?). Disagreements were discussed to reach consensus.

2.9 Data extraction and management

The review sought information on: arguments for or against expanding definitions of research quality, purposes for research quality evaluation, principles of research quality, criteria for research quality assessment, indicators and measures of research quality, and processes for evaluating TDR. Four reviewers independently extracted data from selected articles using the parameters listed in Supplementary Appendix 4 .

2.10 Data synthesis and TDR framework design

Our aim was to synthesize ideas, definitions, and recommendations for TDR quality criteria into a comprehensive and generalizable framework for the evaluation of quality in TDR. Key ideas were extracted from each article and summarized in an Excel database. We classified these ideas into themes and ultimately into overarching principles and associated criteria of TDR quality organized as a rubric ( Wickson and Carew 2014 ). Definitions of each principle and criterion were developed and rubric statements formulated based on the literature and our experience. These criteria (adjusted appropriately to be applied ex ante or ex post ) are intended to be used to assess a TDR project. The reviewer should consider whether the project fully satisfies, partially satisfies, or fails to satisfy each criterion. More information on application is provided in Section 4.3 below.
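As an illustration only (not part of the published framework), the rubric logic can be represented as criteria paired with rubric statements and a three-level judgement per criterion; the example text is abridged from Table 3, and the principle grouping is an assumption.

```python
# Sketch of the rubric as a data structure; our illustration, not the
# framework's implementation.
from dataclasses import dataclass
from enum import Enum

class Judgement(Enum):  # the three-level scale described above
    FULLY_SATISFIES = "fully satisfies"
    PARTIALLY_SATISFIES = "partially satisfies"
    FAILS_TO_SATISFY = "fails to satisfy"

@dataclass
class Criterion:
    principle: str   # e.g. "Relevance" (assumed grouping for illustration)
    name: str        # e.g. "Explicit theory of change"
    definition: str  # what the criterion means
    rubric: str      # the statement the reviewer judges against

theory_of_change = Criterion(
    principle="Relevance",
    name="Explicit theory of change",
    definition="Main intended outcomes and pathways are identified.",
    rubric="The research explicitly identifies its main intended outcomes "
           "and how they are expected to be realized.",
)

def assess(criteria, judgements):
    """One judgement per criterion; no numeric total is computed, since
    the framework asks for judgement against each rubric statement."""
    return {c.name: judgements[c.name] for c in criteria}

print(assess([theory_of_change],
             {"Explicit theory of change": Judgement.PARTIALLY_SATISFIES}))
```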

We tested the framework on a set of completed RRU graduate theses that used transdisciplinary approaches, with an explicit problem orientation and intent to contribute to social or environmental change. Three rounds of testing were done, with revisions after each round to refine and improve the framework.

3.1 Overview of the selected articles

Thirty-eight papers satisfied the inclusion criteria. A wide range of terms are used in the selected papers, including: cross-disciplinary; interdisciplinary; transdisciplinary; methodological pluralism; mode 2; triple helix; and supradisciplinary. Eight included papers specifically focused on sustainability science or TDR in natural resource management, or identified sustainability research as a growing TDR field that needs new forms of evaluation ( Cash et al. 2002 ; Bergmann et al. 2005 ; Chataway, Smith and Wield 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Andrén 2010 ; Carew and Wickson 2010 ; Lang et al. 2012 ; Gaziulusoy and Boyle 2013 ). Carew and Wickson (2010) build on the experience in the TDR realm to propose criteria and indicators of quality for ‘responsible research and innovation’.

The selected articles are written from three main perspectives. One set is primarily interested in advancing TDR approaches. These papers recognize the need for new quality measures to encourage and promote high-quality research and to overcome perceived biases against TDR approaches in research funding and publishing. A second set of papers is written from an evaluation perspective, with a focus on improving evaluation of TDR. The third set is written from the perspective of qualitative research characterized by methodological pluralism, with many characteristics and issues relevant to TDR approaches.

The majority of the articles focus on the project scale, some on the organization level, and some do not specify. Some articles explicitly focus on ex ante evaluation (e.g. proposal evaluation), others on ex post evaluation, and many are not explicit about the project stage they are concerned with. The methods used in the reviewed articles include authors’ reflection and opinion, literature review, expert consultation, document analysis, and case study. Summaries of report characteristics are available online ( Supplementary Appendices 5–8 ). Eight articles provide comprehensive evaluation frameworks and quality criteria specifically for TDR and research-in-context. The rest of the articles discuss aspects of quality related to TDR and recommend quality definitions, criteria, and/or evaluation processes.

3.2 The need for quality criteria and evaluation methods for TDR

Many of the selected articles highlight the lack of widely agreed principles and criteria of TDR quality. They note that, in the absence of TDR quality frameworks, disciplinary criteria are used ( Morrow 2005 ; Boix-Mansilla 2006a , b ; Feller 2006 ; Klein 2006 , 2008 ; Wickson, Carew and Russell 2006 ; Scott 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Oberg 2008 ; Erno-Kjolhede and Hansson 2011 ), and evaluations are often carried out by reviewers who lack cross-disciplinary experience and do not have a shared understanding of quality ( Aagaard-Hansen and Svedin 2009 ). Quality is discussed by many as a relative concept, developed within disciplines, and therefore defined and understood differently in each field ( Morrow 2005 ; Klein 2006 ; Oberg 2008 ; Mitchell and Willets 2009 ; Huutoniemi 2010 ; Hellstrom 2011 ). Jahn and Keil (2015) point out the difficulty of creating a common set of quality criteria for TDR in the absence of a standard agreed-upon definition of TDR. Many of the selected papers argue the need to move beyond narrowly defined ideas of ‘scientific excellence’ to incorporate a broader assessment of quality which includes societal relevance ( Hemlin and Rasmussen 2006 ; Chataway, Smith and Wield 2007 ; Ozga 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ). This shift includes greater focus on research organization, research process, and continuous learning, rather than primarily on research outputs ( Hemlin and Rasmussen 2006 ; de Jong et al. 2011 ; Wickson and Carew 2014 ; Jahn and Keil 2015 ). This responds to and reflects societal expectations that research should be accountable and have demonstrated utility ( Cloete 1997 ; Defila and Di Giulio 1999 ; Wickson, Carew and Russell 2006 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Stige 2009 ).

A central aim of TDR is to achieve socially relevant outcomes, and TDR quality criteria should demonstrate accountability to society ( Cloete 1997 ; Hemlin and Rasmussen 2006 ; Chataway, Smith and Wield 2007 ; Ozga 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; de Jong et al. 2011 ). Integration and mutual learning are core elements of TDR; it is not enough to transcend boundaries and incorporate societal knowledge but, as Carew and Wickson ( 2010 : 1147) summarize: ‘…the TD researcher needs to put effort into integrating these potentially disparate knowledges with a view to creating useable knowledge. That is, knowledge that can be applied in a given problem context and has some prospect of producing desired change in that context’. The inclusion of societal actors in the research process, the unique and often dispersed organization of research teams, and the deliberate integration of different traditions of knowledge production all fall outside of conventional assessment criteria ( Feller 2006 ).

Not only do the criteria need to be updated, expanded, agreed upon, and their assumptions made explicit ( Boix-Mansilla 2006a ; Klein 2006 ; Scott 2007 ) but, given the specific problem orientation of TDR, reviewers beyond disciplinary academic peers need to be included in the assessment of quality ( Cloete 1997 ; Scott 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Klein 2008 ). Several authors discuss the lack of reviewers with strong cross-disciplinary experience ( Aagaard-Hansen and Svedin 2009 ) and the lack of common criteria, philosophical foundations, and language for use by peer reviewers ( Klein 2008 ; Aagaard-Hansen and Svedin 2009 ). Peer review of TDR could be improved with explicit TDR quality criteria and appropriate processes to ensure clear dialog between reviewers.

Finally, there is the need for increased emphasis on evaluation as part of the research process ( Bergmann et al. 2005 ; Hemlin and Rasmussen 2006 ; Meyrick 2006 ; Chataway, Smith and Wield 2007 ; Stige, Malterud and Midtgarden 2009 ; Hellstrom 2011 ; Lang et al. 2012 ; Wickson and Carew 2014 ). This is particularly true in large, complex, problem-oriented research projects. Ongoing monitoring of the research organization and process contributes to learning and adaptive management while research is underway and so helps improve quality. As stated by Wickson and Carew ( 2014 : 262): ‘We believe that in any process of interpreting, rearranging and/or applying these criteria, open negotiation on their meaning and application would only positively foster transformative learning, which is a valued outcome of good TD processes’.

3.3 TDR quality criteria and assessment approaches

Many of the papers provide quality criteria and/or describe constituent parts of quality. Aagaard-Hansen and Svedin (2009) define three key aspects of quality: societal relevance, impact, and integration. Meyrick (2006) states that quality research is transparent and systematic. Boaz and Ashby (2003) describe quality in four dimensions: methodological quality, quality of reporting, appropriateness of methods, and relevance to policy and practice. Although each article deconstructs quality in different ways and with different foci and perspectives, there is significant overlap, and recurring themes emerge across the papers reviewed. There is a broadly shared perspective that TDR quality is a multidimensional concept shaped by the specific context within which research is done ( Spaapen, Dijstelbloem and Wamelink 2007 ; Klein 2008 ), making a universal definition of TDR quality difficult or impossible ( Huutoniemi 2010 ).

Huutoniemi (2010) identifies three main approaches to conceptualizing quality in IDR and TDR: (1) using existing disciplinary standards adapted as necessary for IDR; (2) building on the quality standards of disciplines while fundamentally incorporating ways to deal with epistemological integration, problem focus, context, stakeholders, and process; and (3) radical departure from any disciplinary orientation in favor of external, emergent, context-dependent quality criteria that are defined and enacted collaboratively by a community of users.

The first approach is prominent in current research funding and evaluation protocols. Conservative approaches of this kind are criticized for privileging disciplinary research and for failing to provide guidance and quality control for transdisciplinary projects. The third approach would ‘undermine the prevailing status of disciplinary standards in the pursuit of a non-disciplinary, integrated knowledge system’ ( Huutoniemi 2010 : 313). No predetermined quality criteria are offered, only contextually embedded criteria that need to be developed within a specific research project. To some extent, this is the approach taken by Spaapen, Dijstelbloem and Wamelink (2007) and de Jong et al. (2011) . Such a sui generis approach cannot be used to compare across projects. Most of the reviewed papers take the second approach, and recommend TDR quality criteria that build on a disciplinary base.

Eight articles present comprehensive frameworks for quality evaluation, each with a unique approach, perspective, and goal. Two of these build comprehensive lists of criteria with associated questions to be chosen based on the needs of the particular research project ( Defila and Di Giulio 1999 ; Bergmann et al. 2005 ). Wickson and Carew (2014) develop a reflective heuristic tool with questions to guide researchers through ongoing self-evaluation; they also list criteria for external evaluation and for comparison between projects, and their comprehensive rubric for the evaluation of responsible research and innovation builds on their extensive previous work in TDR. Spaapen, Dijstelbloem and Wamelink (2007) design an approach that evaluates a research project against its own goals and is not meant to compare between projects. Finally, Lang et al. (2012) , Mitchell and Willets (2009) , and Jahn and Keil (2015) develop criteria checklists that can be applied across transdisciplinary projects.

Bergmann et al. (2005) and Carew and Wickson (2010) organize their frameworks into managerial elements of the research project, concerning problem context, participation, management, and outcomes. Lang et al. (2012) and Defila and Di Giulio (1999) focus on the chronological stages of the research process and identify criteria at each stage. Mitchell and Willets (2009) , with a focus on doctoral studies, adapt standard dissertation evaluation criteria to accommodate broader, pluralistic, and more complex studies. Spaapen, Dijstelbloem and Wamelink (2007) focus on evaluating ‘research-in-context’. Wickson and Carew (2014) create a rubric based on criteria that span the research process and stages and include all actors involved. Jahn and Keil (2015) organize their quality criteria into three categories: quality of the research problems, quality of the research process, and quality of the research results.

The remaining papers highlight key themes that must be considered in TDR evaluation. Dominant themes include: engagement with problem context, collaboration and inclusion of stakeholders, heightened need for explicit communication and reflection, integration of epistemologies, recognition of diverse outputs, the focus on having an impact, and reflexivity and adaptation throughout the process. The focus on societal problems in context and the increased engagement of stakeholders in the research process introduce higher levels of complexity that cannot be accommodated by disciplinary standards ( Defila and Di Giulio 1999 ; Bergmann et al. 2005 ; Wickson, Carew and Russell 2006 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Klein 2008 ).

Finally, authors discuss process ( Defila and Di Giulio 1999 ; Bergmann et al. 2005 ; Boix-Mansilla 2006b ; Spaapen, Dijstelbloem and Wamelink 2007 ) and utilitarian values ( Hemlin 2006 ; Erno-Kjolhede and Hansson 2011 ; Bornmann 2013 ) as essential aspects of quality in TDR. Common themes include: (1) the importance of formative and process-oriented evaluation ( Bergmann et al. 2005 ; Hemlin 2006 ; Stige 2009 ); (2) emphasis on the evaluation process itself (not just criteria or outcomes) and reflexive dialog for learning ( Bergmann et al. 2005 ; Boix-Mansilla 2006b ; Klein 2008 ; Oberg 2008 ; Stige, Malterud and Midtgarden 2009 ; Aagaard-Hansen and Svedin 2009 ; Carew and Wickson 2010 ; Huutoniemi 2010 ); (3) the need for peers who are experienced and knowledgeable about TDR for fair peer review ( Boix-Mansilla 2006a , b ; Klein 2006 ; Hemlin 2006 ; Scott 2007 ; Aagaard-Hansen and Svedin 2009 ); (4) the inclusion of stakeholders in the evaluation process ( Bergmann et al. 2005 ; Scott 2007 ; Andrén 2010 ); and (5) the importance of evaluations that are built in-context ( Defila and Di Giulio 1999 ; Feller 2006 ; Spaapen, Dijstelbloem and Wamelink 2007 ; de Jong et al. 2011 ).

While each reviewed approach offers helpful insights, none adequately fulfills the need for a broad and adaptable framework for assessing TDR quality. Wickson and Carew ( 2014 : 257) highlight the need for quality criteria that achieve balance between ‘comprehensiveness and over-prescription’: ‘any emerging quality criteria need to be concrete enough to provide real guidance but flexible enough to adapt to the specificities of varying contexts’. Based on our experience, such a framework should be:

Comprehensive: It should accommodate the main aspects of TDR, as identified in the review.

Time/phase adaptable: It should be applicable across the project cycle.

Scalable: It should be useful for projects of different scales.

Versatile: It should be useful to researchers and collaborators as a guide to research design and management, and to internal and external reviewers and assessors.

Comparable: It should allow comparison of quality between and across projects/programs.

Reflexive: It should encourage and facilitate self-reflection and adaptation based on ongoing learning.

In this section, we synthesize the key principles and criteria of quality in TDR that were identified in the reviewed literature. Principles are the essential elements of high-quality TDR. Criteria are the conditions that need to be met in order to achieve a principle. We conclude by providing a framework for the evaluation of quality in TDR ( Table 3 ) and guidance for its application.

Table 3. Transdisciplinary research quality assessment framework

Criterion | Definition | Rubric scale
Clearly defined socio-ecological context | The context is well defined and described and analyzed sufficiently to identify research entry points. | The context is well defined, described, and analyzed sufficiently to identify research entry points.
Socially relevant research problem | The research problem is relevant to the problem context. | The research problem is defined and framed in a way that clearly shows its relevance to the context and that demonstrates that consideration has been given to the practical application of research activities and outputs.
Engagement with problem context | Researchers demonstrate appropriate breadth and depth of understanding of, and sufficient interaction with, the problem context. | The documentation demonstrates that the researcher/team has interacted appropriately and sufficiently with the problem context to understand it and to have potential to influence it (e.g. through site visits, meeting participation, discussion with stakeholders, document review) in planning and implementing the research.
Explicit theory of change | The research explicitly identifies its main intended outcomes and how they are intended/expected to be realized and to contribute to longer-term outcomes and/or impacts. | The research explicitly identifies its main intended outcomes and how they are intended/expected to be realized and to contribute to longer-term outcomes and/or impacts.
Relevant research objectives and design | The research objectives and design are relevant, timely, and appropriate to the problem context, including attention to stakeholder needs and values. | The documentation clearly demonstrates, through sufficient analysis of key factors, needs, and complexity within the context, that the research objectives and design are relevant and appropriate.
Appropriate project implementation | Research execution is suitable to the problem context and the socially relevant research objectives. | The documentation reflects effective project implementation that is appropriate to the context, with reflection and adaptation as needed.
Effective communication | Communication during and after the research process is appropriate to the context and accessible to stakeholders, users, and other intended audiences. | The documentation indicates that the research project planned and achieved appropriate communications with all necessary actors during the research process.
Broad preparation | The research is based on a strong integrated theoretical and empirical foundation that is relevant to the context. | The documentation demonstrates critical understanding of an appropriate breadth and depth of literature and theory from across disciplines relevant to the context, and of the context itself.
Clear research problem definition | The research problem is clearly defined, researchable, grounded in the academic literature, and relevant to the context. | The research problem is clearly stated and defined, researchable, and grounded in the academic literature and the problem context.
Objectives stated and met | Research objectives are clearly stated. | The research objectives are clearly stated, logically and appropriately related to the context and the research problem, and achieved, with any necessary adaptation explained.
Feasible research project | The research design and resources are appropriate and sufficient to meet the objectives as stated, and sufficiently resilient to adapt to unexpected opportunities and challenges throughout the research process. | The research design and resources are appropriate and sufficient to meet the objectives as stated, and sufficiently resilient to adapt to unexpected opportunities and challenges throughout the research process.
Adequate competencies | The skills and competencies of the researcher/team/collaboration (including academic and societal actors) are sufficient and in appropriate balance (without unnecessary complexity) to succeed. | The documentation recognizes the limitations and biases of individuals’ knowledge, identifies the knowledge, skills, and expertise needed to carry out the research, and provides evidence that they are represented in the research team in the appropriate measure to address the problem.
Research approach fits purpose | Disciplines, perspectives, epistemologies, approaches, and theories are combined appropriately to create an approach that is appropriate to the research problem and the objectives. | The documentation explicitly states the rationale for the inclusion and integration of different epistemologies, disciplines, and methodologies, justifies the approach taken in reference to the context, and discusses the process of integration, including how paradoxes and conflicts were managed.
Appropriate methods | Methods are fit to purpose and well suited to answering the research questions and achieving the objectives. | Methods are clearly described, and documentation demonstrates that the methods are fit to purpose, systematic yet adaptable, and transparent. Novel (unproven) methods or adaptations are justified and explained, including why they were used and how they maintain scientific rigor.
Clearly presented argument | The movement from analysis through interpretation to conclusions is transparently and logically described. Sufficient evidence is provided to clearly demonstrate the relationship between evidence and conclusions. | Results are clearly presented. Analyses and interpretations are adequately explained, with clearly described terminology and full exposition of the logic leading to conclusions, including exploration of possible alternate explanations.
Transferability/generalizability of research findings | Appropriate and rigorous methods ensure the study’s findings are externally valid (generalizable). In some cases, findings may be too context specific to be generalizable, in which case the research would be judged on its ability to act as a model for future research. | The document clearly explains how the research findings are transferable to other contexts or, in cases that are too context specific to be generalizable, discusses aspects of the research process or findings that may be transferable to other contexts and/or used as learning cases.
Limitations stated | Researchers engage in ongoing individual and collective reflection in order to explicitly acknowledge and address limitations. | Limitations are clearly stated and adequately accounted for on an ongoing basis throughout the research project.
Ongoing monitoring and reflexivity | Researchers engage in ongoing reflection and adaptation of the research process, making changes as new obstacles, opportunities, circumstances, and/or knowledge surface. | Processes of reflection, individually and as a research team, are clearly documented throughout the research process, along with clear descriptions and justifications for any changes to the research process made as a result of reflection.
Disclosure of perspective | Actual, perceived, and potential bias is clearly stated and accounted for, including aspects of researchers’ position, sources of support, financing, collaborations, partnerships, research mandate, assumptions, goals, and bounds placed on commissioned research. | The documentation identifies potential or actual bias, including aspects of researchers’ positions, sources of support, financing, collaborations, partnerships, research mandate, assumptions, goals, and bounds placed on commissioned research.
Effective collaboration | Appropriate processes are in place to ensure effective collaboration (e.g. clear and explicit roles and responsibilities agreed upon, transparent and appropriate decision-making structures). | The documentation explicitly discusses the collaboration process, with adequate demonstration that the opportunities and process for collaboration are appropriate to the context and the actors involved (e.g. clear and explicit roles and responsibilities agreed upon, transparent and appropriate decision-making structures).
Genuine and explicit inclusion | Inclusion of diverse actors in the research process is clearly defined. Representation of actors’ perspectives, values, and unique contexts is ensured through adequate planning, explicit agreements, communal reflection, and reflexivity. | The documentation explains the range of participants and perspectives/cultural backgrounds involved, clearly describes what steps were taken to ensure the respectful inclusion of diverse actors/views, and explains the roles and contributions of all participants in the research process.
Research is ethical | Research adheres to standards of ethical conduct. | The documentation describes the ethical review process followed and, considering the full range of stakeholders, explicitly identifies any ethical challenges and how they were resolved.
Research builds social capacity | Change takes place in individuals, groups, and at the institutional level through shared learning. This can manifest as a change in knowledge, understanding, and/or perspective of participants in the research project. | There is evidence of observed changes in knowledge, behavior, understanding, and/or perspectives of research participants and/or stakeholders as a result of the research process and/or findings.
Contribution to knowledge | Research contributes to knowledge and understanding in academic and social realms in a timely, relevant, and significant way. | There is evidence that knowledge created through the project is being/has been used by intended audiences and end-users.
Practical application | Research has a practical application. The findings, process, and/or products of research are used. | There is evidence that innovations developed through the research and/or the research process have been (or will be) applied in the real world.
Significant outcome | Research contributes to the solution of the targeted problem or provides unexpected solutions to other problems. This can include a variety of outcomes: building societal capacity, learning, use of research products, and/or changes in behaviors. | There is evidence that the research has contributed to positive change in the problem context and/or innovations that have positive social or environmental impacts.

a Research problems are the particular topic, area of concern, question to be addressed, challenge, opportunity, or focus of the research activity. Research problems are related to the societal problem but take on a specific focus, or framing, within a societal problem.

b Problem context refers to the social and environmental setting(s) that gives rise to the research problem, including aspects of: location; culture; scale in time and space; social, political, economic, and ecological/environmental conditions; resources and societal capacity available; uncertainty, complexity, and novelty associated with the societal problem; and the extent of agency that is held by stakeholders (Carew and Wickson 2010).

c Words such as 'appropriate', 'suitable', and 'adequate' are used deliberately to allow quality criteria to be flexible and specific enough for the needs of individual research projects (Oberg 2008).

d Research process refers to the series of decisions made and actions taken throughout the entire duration of the research project and encompassing all aspects of the research project.

e Reflexivity refers to an iterative process of formative, critical reflection on the important interactions and relationships between a research project’s process, context, and product(s).

f In an ex ante evaluation, ‘evidence of’ would be replaced with ‘potential for’.

The reviewed articles consistently recognize the need for appropriate measures of scientific quality (usually adapted from disciplinary antecedents), but also consider broader sets of criteria addressing the societal significance and applicability of research and the need for engagement and representation of stakeholder values and knowledge. Cash et al. (2002) nicely conceptualize three key aspects of effective sustainability research: salience (or relevance), credibility, and legitimacy. These are presented as necessary attributes for research to successfully produce transferable, useful information that can cross boundaries between disciplines, across scales, and between science and society. Many of the papers also refer to the principle that high-quality TDR should be effective in terms of contributing to the solution of problems. These four principles are discussed in the following sections.

4.1.1 Relevance

Relevance is the importance, significance, and usefulness of the research project's objectives, process, and findings to the problem context and to society. This includes the appropriateness of the timing of the research, the questions being asked, the outputs, and the scale of the research in relation to the societal problem being addressed. Good-quality TDR addresses important social/environmental problems and produces knowledge that is useful for decision making and problem solving (Cash et al. 2002; Klein 2006). As Erno-Kjolhede and Hansson (2011: 140) explain, quality 'is first and foremost about creating results that are applicable and relevant for the users of the research'. Researchers must demonstrate an in-depth knowledge of and ongoing engagement with the problem context in which their research takes place (Wickson, Carew and Russell 2006; Stige, Malterud and Midtgarden 2009; Mitchell and Willetts 2009). From the early steps of problem formulation and research design through to the appropriate and effective communication of research findings, the applicability and relevance of the research to the societal problem must be explicitly stated and incorporated.

4.1.2 Credibility

Credibility refers to whether or not the research findings are robust and the knowledge produced is scientifically trustworthy. This includes clear demonstration that the data are adequate, with well-presented methods and logical interpretations of findings. High-quality research is authoritative, transparent, defensible, believable, and rigorous. This is the traditional purview of science, and traditional disciplinary criteria can be applied in TDR evaluation to an extent. Additional and modified criteria are needed to address the integration of epistemologies and methodologies and the development of novel methods through collaboration, the broad preparation and competencies required to carry out the research, and the need for reflection and adaptation when operating in complex systems. Having researchers actively engaged in the problem context and including extra-scientific actors as part of the research process helps to achieve relevance and legitimacy of the research; it also adds complexity and heightened requirements of transparency, reflection, and reflexivity to ensure objective, credible research is carried out.

Active reflexivity is a criterion of credibility of TDR that may seem to contradict more rigid disciplinary methodological traditions (Carew and Wickson 2010). Practitioners of TDR recognize that credible work in these problem-oriented fields requires active reflexivity, epitomized by ongoing learning, flexibility, and adaptation to ensure the research approach and objectives remain relevant and fit-to-purpose (Lincoln 1995; Bergmann et al. 2005; Wickson, Carew and Russell 2006; Mitchell and Willetts 2009; Andrén 2010; Carew and Wickson 2010; Wickson and Carew 2014). Changes made during the research process must be justified and reported transparently and explicitly to maintain credibility.

The need for critical reflection on potential bias and limitations becomes more important to maintain the credibility of research-in-context (Lincoln 1995; Bergmann et al. 2005; Mitchell and Willetts 2009; Stige, Malterud and Midtgarden 2009). Transdisciplinary researchers must maintain a high level of objectivity and transparency while actively engaging in the problem context. This point demonstrates the fine balance between different aspects of quality, in this case relevance and credibility, and the need to be aware of tensions and to seek complementarities (Cash et al. 2002).

4.1.3 Legitimacy

Legitimacy refers to whether the research process is perceived as fair and ethical by end-users. In other words, is it acceptable and trustworthy in the eyes of those who will use it? This requires the appropriate inclusion and consideration of diverse values and interests, and the ethical and fair representation of all involved. Legitimacy may be achieved in part through the genuine inclusion of stakeholders in the research process. Whereas credibility refers to technical aspects of sound research, legitimacy deals with sociopolitical aspects of the knowledge production process and the products of research. Do stakeholders trust the researchers and the research process, including funding sources and other sources of potential bias? Do they feel represented? Legitimate TDR 'considers appropriate values, concerns, and perspectives of different actors' (Cash et al. 2002: 2) and incorporates these perspectives into the research process through collaboration and mutual learning (Bergmann et al. 2005; Chataway, Smith and Wield 2007; Andrén 2010; Huutoniemi 2010). A fair and ethical process is important to uphold standards of quality in all research; however, there are additional considerations that are unique to TDR.

Because TDR happens in-context and often in collaboration with societal actors, the disclosure of researcher perspective and a transparent statement of all partnerships, financing, and collaboration are vital to ensure an unbiased research process (Lincoln 1995; Defila and Di Giulio 1999; Boaz and Ashby 2003; Barker and Pistrang 2005; Bergmann et al. 2005). The disclosure of perspective has both internal and external aspects: on one hand, it ensures the researchers themselves explicitly reflect on and account for their own position, potential sources of bias, and limitations throughout the process; on the other hand, it makes the process transparent to those external to the research group, who can then judge its legitimacy based on their own perspective of fairness (Cash et al. 2002).

TDR includes the engagement of societal actors along a continuum of participation from consultation to co-creation of knowledge (Brandt et al. 2013). Regardless of the depth of participation, all processes that engage societal actors must ensure that inclusion/engagement is genuine, roles are explicit, and processes for effective and fair collaboration are present (Bergmann et al. 2005; Wickson, Carew and Russell 2006; Spaapen, Dijstelbloem and Wamelink 2007; Hellstrom 2012). Important considerations include: the accurate representation of those involved; explicit and agreed-upon roles and contributions of actors; and adequate planning and procedures to ensure all values, perspectives, and contexts are adequately and appropriately incorporated. Mitchell and Willetts (2009) consider cultural competence a key criterion that can support researchers in navigating diverse epistemological perspectives. This is similar to what Morrow (2005) terms 'social validity', a criterion that asks researchers to be responsive to and critically aware of the diversity of perspectives and cultures influenced by their research. Several authors highlight that in order to develop this critical awareness of the diversity of cultural paradigms that operate within a problem situation, researchers should practice responsive, critical, and/or communal reflection (Bergmann et al. 2005; Wickson, Carew and Russell 2006; Mitchell and Willetts 2009; Carew and Wickson 2010). Reflection and adaptation are important quality criteria that cut across multiple principles and facilitate learning throughout the process, a key foundation of TD inquiry.

4.1.4 Effectiveness

We define effective research as research that contributes to positive change in the social, economic, and/or environmental problem context. Transdisciplinary inquiry is rooted in the objective of solving real-world problems (Klein 2008; Carew and Wickson 2010) and must have the potential to (ex ante) or actually (ex post) make a difference if it is to be considered of high quality (Erno-Kjolhede and Hansson 2011). Potential research effectiveness can be indicated and assessed at the proposal stage and during the research process through: a clear and stated intention to address and contribute to a societal problem; the establishment of the research process and objectives in relation to the problem context; and continuous reflection on the usefulness of the research findings and products to the problem (Bergmann et al. 2005; Lahtinen et al. 2005; de Jong et al. 2011).

Assessing research effectiveness ex post remains a major challenge, especially in complex transdisciplinary approaches. Conventional and widely used measures of 'scientific impact' count outputs such as journal articles and other publications and citations of those outputs (e.g. the h-index and i10-index). While these are useful indicators of scholarly influence, they are insufficient and inappropriate measures of research effectiveness where research aims to contribute to social learning and change. We need also (or instead) to focus on other kinds of research and scholarship outputs and outcomes, and on the social, economic, and environmental impacts that may result.
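As a concrete aside for readers less familiar with these metrics, the short sketch below shows how they are computed from a list of per-paper citation counts. The citation record is invented and the sketch is illustrative only; the point of the argument above is precisely that such tallies, however convenient, say nothing about contributions to social learning or change.

```python
# Worked example of the output-counting metrics mentioned above.
# The h-index is the largest h such that h papers have at least h
# citations each; the i10-index counts papers with at least 10 citations.
# The citation counts below are invented for illustration.

def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations: list[int]) -> int:
    """Number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

citations = [48, 33, 20, 15, 11, 9, 4, 3, 1, 0]  # hypothetical record
print(h_index(citations))    # 6: six papers have >= 6 citations each
print(i10_index(citations))  # 5: five papers have >= 10 citations
```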

For many authors, contributing to learning and building societal capacity are central goals of TDR (Defila and Di Giulio 1999; Spaapen, Dijstelbloem and Wamelink 2007; Carew and Wickson 2010; Erno-Kjolhede and Hansson 2011; Hellstrom 2011), and so are considered part of TDR effectiveness. Learning can be characterized as changes in knowledge, attitudes, or skills and can be assessed directly, or through observed behavioral changes and network and relationship development. Some evaluation methodologies (e.g. Outcome Mapping (Earl, Carden and Smutylo 2001)) specifically measure these kinds of changes. Other evaluation methodologies consider the role of research within complex systems and assess effectiveness in terms of contributions to changes in policy and practice and the resulting social, economic, and environmental benefits (ODI 2004, 2012; White and Phillips 2012; Mayne and Stern 2013).

4.2 TDR quality criteria

TDR quality criteria and their definitions (explicit or implicit) were extracted from each article and summarized in an Excel database. These criteria were classified into themes corresponding to the four principles identified above, then sorted and refined to develop sets of criteria that are comprehensive, mutually exclusive, and representative of the ideas presented in the reviewed articles. Within each principle, the criteria are organized roughly in the sequence of a typical project cycle (e.g. with research design following problem identification and preceding implementation). Definitions of each criterion were developed to reflect the concepts found in the literature, then tested and refined iteratively to improve clarity. Rubric statements were formulated based on the literature and our own experience.

The complete set of principles, criteria, and definitions is presented as the TDR Quality Assessment Framework ( Table 3 ).

4.3 Guidance on the application of the framework

4.3.1 Timing

Most criteria can be applied at each stage of the research process (ex ante, mid-term, and ex post), using appropriate interpretations at each stage. Ex ante (i.e. proposal) assessment should focus on a project's explicitly stated intentions and approaches to address the criteria. Mid-term indicators will focus on the research process and whether or not it is being implemented in a way that will satisfy the criteria. Ex post assessment should consider whether the research has been done appropriately for the purpose and whether the desired results have been achieved.

4.3.2 New meanings for familiar terms

Many of the terms used in the framework are extensions of disciplinary criteria and share the same or similar names, with similar but nuanced meanings. The principles and criteria used here extend beyond their disciplinary antecedents and include new concepts and understandings that encapsulate the unique characteristics and needs of TDR and allow for the definition and evaluation of quality in TDR. This is especially true of the criteria related to credibility. These criteria are analogous to traditional disciplinary criteria, but with much stronger emphasis on grounding in both the scientific and the social/environmental contexts. We urge readers to pay close attention to the definitions provided in Table 3 as well as the detailed descriptions of the principles in Section 4.1.

4.3.3 Using the framework

The TDR quality framework (Table 3) is designed to assess TDR according to a project's purpose; i.e. the criteria must be interpreted with respect to the context and goals of an individual research activity. The framework lists the main criteria synthesized from the literature and our experience, organized within the principles of relevance, credibility, legitimacy, and effectiveness. The table presents the criteria within each principle, ordered to approximate a typical process of identifying a research problem and designing and implementing research. We recognize that the actual process in any given project will be iterative and will not necessarily follow this sequence, but this provides a logical flow. A concise definition is provided in the second column to explain each criterion. We then provide a rubric statement in the third column, phrased to be applied when the research has been completed. In most cases, the same statement can be used at the proposal stage with a simple tense change or other minor grammatical revision, except for the criteria relating to effectiveness. As discussed above, assessing effectiveness in terms of outcomes and/or impact requires evaluation research; at the proposal stage, it is only possible to assess potential effectiveness.

Many rubrics offer a set of statements for each criterion that represent progressively higher levels of achievement; the evaluator is asked to select the best match. In practice, this often results in vague and relative statements of merit that are difficult to apply. We have opted to present a single rubric statement in absolute terms for each criterion. The assessor then rates how well a project satisfies each criterion on a simple three-point Likert scale. If a project fully satisfies a criterion, that is, if there is evidence that the criterion has been addressed in a way that is coherent, explicit, sufficient, and convincing, it is scored 2 for that criterion. A score of 2 means that the evaluator is persuaded that the project addressed the criterion in an intentional, appropriate, explicit, and thorough way. A score of 1 is given when there is some evidence that the criterion was considered, but the treatment is incomplete, unintentional, or otherwise unsatisfactory; for example, when a criterion is explicitly discussed but poorly addressed, or when there is some indication that the criterion has been considered and partially addressed but not explicitly, thoroughly, or adequately. A score of 0 indicates that there is no evidence that the criterion was addressed, or that it was addressed in a way that was misguided or inappropriate.
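To make the scoring scheme concrete, here is a minimal sketch in Python of how ratings might be recorded and summarized. The criterion names are a subset of Table 3 (the relevance criteria are omitted for brevity), the example ratings are invented, and the mean-per-principle summary is an illustrative choice rather than a prescribed part of the framework.

```python
# Minimal sketch of applying the rubric: each criterion is rated 0, 1, or 2
# as defined in the text, and ratings are summarized per principle.
# Criterion names are a subset of Table 3 (relevance omitted for brevity);
# the mean-per-principle summary is illustrative, not part of the framework.

SCORE_MEANINGS = {
    0: "no evidence, or addressed in a misguided or inappropriate way",
    1: "some evidence, but not explicit, thorough, or adequate",
    2: "coherent, explicit, sufficient, and convincing evidence",
}

FRAMEWORK = {
    "credibility": [
        "clearly presented argument",
        "limitations stated",
        "ongoing monitoring and reflexivity",
    ],
    "legitimacy": [
        "disclosure of perspective",
        "effective collaboration",
        "genuine and explicit inclusion",
        "research is ethical",
    ],
    "effectiveness": [
        "research builds social capacity",
        "contribution to knowledge",
        "practical application",
        "significant outcome",
    ],
}

def summarize(ratings: dict[str, int]) -> dict[str, float]:
    """Mean 0-2 rating per principle for one project."""
    summary = {}
    for principle, criteria in FRAMEWORK.items():
        scores = [ratings[c] for c in criteria if c in ratings]
        if any(s not in SCORE_MEANINGS for s in scores):
            raise ValueError("ratings must be 0, 1, or 2")
        summary[principle] = sum(scores) / len(scores) if scores else float("nan")
    return summary

# Hypothetical ex post assessment of a small thesis project.
ratings = {
    "clearly presented argument": 2,
    "limitations stated": 1,
    "ongoing monitoring and reflexivity": 1,
    "disclosure of perspective": 2,
    "effective collaboration": 1,
    "genuine and explicit inclusion": 1,
    "research is ethical": 2,
    "research builds social capacity": 1,
    "contribution to knowledge": 2,
    "practical application": 1,
    "significant outcome": 0,
}

for principle, mean in summarize(ratings).items():
    print(f"{principle}: {mean:.2f} / 2")
```

An ex ante version would use the same structure, with ratings read against 'potential for' rather than 'evidence of' each criterion (see note f to Table 3).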

It is critical that the evaluation be done in context, keeping in mind the purpose, objectives, and resources of the project, as well as other contextual information, such as the intended purpose of grant funding or relevant partnerships. Each project is unique in its complexities; what is sufficient or adequate for one criterion in one research project may be insufficient or inappropriate for another. Words such as 'appropriate', 'suitable', and 'adequate' are used deliberately to encourage application of the criteria to suit the needs of individual research projects (Oberg 2008). Evaluators must consider the objectives of the research project and the problem context within which it is carried out as the benchmark for evaluation. For example, we tested the framework with RRU master's theses. These are typically small projects with limited scope, carried out by a single researcher. Expectations for 'effective communication', 'competencies', or 'effective collaboration' are very different in these kinds of projects than in a multi-year, multi-partner CIFOR project. All criteria should be evaluated through the lens of the stated research objectives, research goals, and context.

The systematic review identified relevant articles from a diverse literature that nonetheless share a strong central focus. Collectively, they highlight the complexity of contemporary social and environmental problems and emphasize that addressing such issues requires combinations of new knowledge and innovation, action, and engagement. Traditional disciplinary research has often failed to provide solutions because it cannot adequately cope with complexity. New forms of research are proliferating, crossing disciplinary and academic boundaries, integrating methodologies, and engaging a broader range of research participants to make research more relevant and effective. In theory, such approaches appear to offer great potential to contribute to transformative change. However, because these approaches are new, multidimensional, complex, and often unique, it has been difficult to know what works, how, and why. In the absence of the kinds of methodological and quality standards that guide disciplinary research, there are no generally agreed criteria for evaluating such research.

Criteria are needed to guide and help ensure that TDR is of high quality, to inform the teaching and learning of new researchers, and to encourage and support the further development of transdisciplinary approaches. The lack of a standard, broadly applicable framework for evaluating quality in TDR is perceived to cause an implicit or explicit devaluation of high-quality TDR, or may prevent quality TDR from being done at all. There is a demonstrated need for an operationalized understanding of quality that addresses the characteristics, contributions, and challenges of TDR. The reviewed articles approach the topic from different perspectives and fields of study, using different terminology for similar concepts, or the same terminology for different concepts, and with unique ways of organizing and categorizing the dimensions and criteria of quality. We have synthesized and organized these concepts as key TDR principles and criteria in a TDR Quality Framework, presented as an evaluation rubric. We have tested the framework on a set of master's theses and found it to be broadly applicable, usable, and useful for analyzing individual projects and for comparing projects within the set. We anticipate that further testing with a wider range of projects will help refine and improve the definitions and rubric statements. We found that the three-point Likert scale (0–2) offered sufficient variability for our purposes, and that rating is less subjective than with relative rubric statements. It may be possible to increase rating precision with more points on the scale, to increase sensitivity for comparison purposes, for example in a review of proposals for a particular grant application.

Many of the articles we reviewed emphasize the importance of the evaluation process itself. The formative, developmental role of evaluation in TDR is seen as essential to the goals of mutual learning as well as to ensure that research remains responsive and adaptive to the problem context. In order to adequately evaluate quality in TDR, the process, including who carries out the evaluations, when, and in what manner, must be revised to be suitable to the unique characteristics and objectives of TDR. We offer this review and synthesis, along with a proposed TDR quality evaluation framework, as a contribution to an important conversation. We hope that it will be useful to researchers and research managers to help guide research design, implementation and reporting, and to the community of research organizations, funders, and society at large. As underscored in the literature review, there is a need for an adapted research evaluation process that will help advance problem-oriented research in complex systems, ultimately to improve research effectiveness.

This work was supported by funding from the Canada Research Chairs program. Funding support from the Canadian Social Sciences and Humanities Research Council (SSHRC) and technical support from the Evidence Based Forestry Initiative of the Centre for International Forestry Research (CIFOR), funded by UK DfID are also gratefully acknowledged.

Supplementary data are available online.

The authors thank Barbara Livoreil and Stephen Dovers for valuable comments and suggestions on the protocol and Gillian Petrokofsky for her review of the protocol and a draft version of the manuscript. Two anonymous reviewers and the editor provided insightful critique and suggestions in two rounds that have helped to substantially improve the article.

Conflict of interest statement . None declared.

1. ‘Stakeholders’ refers to individuals and groups of societal actors who have an interest in the issue or problem that the research seeks to address.

2. The terms ‘quality’ and ‘excellence’ are often used in the literature with similar meaning. Technically, ‘excellence’ is a relative concept, referring to the superiority of a thing compared to other things of its kind. Quality is an attribute or a set of attributes of a thing. We are interested in what these attributes are or should be in high-quality research. Therefore, the term ‘quality’ is used in this discussion.

3. The terms ‘science’ and ‘research’ are not always clearly distinguished in the literature. We take the position that ‘science’ is a more restrictive term that is properly applied to systematic investigations using the scientific method. ‘Research’ is a broader term for systematic investigations using a range of methods, including but not restricted to the scientific method. We use the term ‘research’ in this broad sense.

Aagaard-Hansen, J. and Svedin, U. (2009) 'Quality Issues in Cross-disciplinary Research: Towards a Two-pronged Approach to Evaluation', Social Epistemology, 23/2: 165–76. DOI: 10.1080/02691720902992323

Andrén, S. (2010) 'A Transdisciplinary, Participatory and Action-Oriented Research Approach: Sounds Nice but What Do You Mean?', unpublished working paper. Human Ecology Division: Lund University, 1–21. <https://lup.lub.lu.se/search/publication/1744256>

Australian Research Council (ARC) (2012) ERA 2012 Evaluation Handbook: Excellence in Research for Australia. Australia: ARC. <http://www.arc.gov.au/pdf/era12/ERA%202012%20Evaluation%20Handbook_final%20for%20web_protected.pdf>

Balsiger, P. W. (2004) 'Supradisciplinary Research Practices: History, Objectives and Rationale', Futures, 36/4: 407–21.

Bantilan, M. C. et al. (2004) 'Dealing with Diversity in Scientific Outputs: Implications for International Research Evaluation', Research Evaluation, 13/2: 87–93.

Barker, C. and Pistrang, N. (2005) 'Quality Criteria under Methodological Pluralism: Implications for Conducting and Evaluating Research', American Journal of Community Psychology, 35/3–4: 201–12.

Bergmann, M. et al. (2005) Quality Criteria of Transdisciplinary Research: A Guide for the Formative Evaluation of Research Projects. Central report of Evalunet, Evaluation Network for Transdisciplinary Research. Frankfurt am Main, Germany: Institute for Social-Ecological Research. <http://www.isoe.de/ftp/evalunet_guide.pdf>

Boaz, A. and Ashby, D. (2003) Fit for Purpose? Assessing Research Quality for Evidence Based Policy and Practice.

Boix-Mansilla, V. (2006a) 'Symptoms of Quality: Assessing Expert Interdisciplinary Work at the Frontier: An Empirical Exploration', Research Evaluation, 15/1: 17–29.

Boix-Mansilla, V. (2006b) 'Conference Report: Quality Assessment in Interdisciplinary Research and Education', Research Evaluation, 15/1: 69–74.

Bornmann, L. (2013) 'What is Societal Impact of Research and How can it be Assessed? A Literature Survey', Journal of the American Society for Information Science and Technology, 64/2: 217–33.

Brandt, P. et al. (2013) 'A Review of Transdisciplinary Research in Sustainability Science', Ecological Economics, 92: 1–15.

Cash, D., Clark, W. C., Alcock, F., Dickson, N. M., Eckley, N. and Jäger, J. (2002) Salience, Credibility, Legitimacy and Boundaries: Linking Research, Assessment and Decision Making, November 2002. KSG Working Papers Series RWP02-046. Available at SSRN: <http://ssrn.com/abstract=372280>

Carew, A. L. and Wickson, F. (2010) 'The TD Wheel: A Heuristic to Shape, Support and Evaluate Transdisciplinary Research', Futures, 42/10: 1146–55.

Collaboration for Environmental Evidence (CEE) (2013) Guidelines for Systematic Review and Evidence Synthesis in Environmental Management, Version 4.2. <www.environmentalevidence.org/Documents/Guidelines/Guidelines4.2.pdf>

Chandler, J. (2014) Methods Research and Review Development Framework: Policy, Structure, and Process. <http://methods.cochrane.org/projects-developments/research>

Chataway, J., Smith, J. and Wield, D. (2007) 'Shaping Scientific Excellence in Agricultural Research', International Journal of Biotechnology, 9/2: 172–87.

Clark, W. C. and Dickson, N. (2003) 'Sustainability Science: The Emerging Research Program', PNAS, 100/14: 8059–61.

Consultative Group on International Agricultural Research (CGIAR) (2011) A Strategy and Results Framework for the CGIAR. <http://library.cgiar.org/bitstream/handle/10947/2608/Strategy_and_Results_Framework.pdf?sequence=4>

Cloete, N. (1997) 'Quality: Conceptions, Contestations and Comments', African Regional Consultation Preparatory to the World Conference on Higher Education, Dakar, Senegal, 1–4 April 1997.

Defila, R. and Di Giulio, A. (1999) 'Evaluating Transdisciplinary Research', Panorama: Swiss National Science Foundation Newsletter, 1: 4–27. <www.ikaoe.unibe.ch/forschung/ip/Specialissue.Pano.1.99.pdf>

Donovan, C. (2008) 'The Australian Research Quality Framework: A Live Experiment in Capturing the Social, Economic, Environmental, and Cultural Returns of Publicly Funded Research. Reforming the Evaluation of Research', New Directions for Evaluation, 118: 47–60.

Earl, S., Carden, F. and Smutylo, T. (2001) Outcome Mapping: Building Learning and Reflection into Development Programs. Ottawa, ON: International Development Research Centre.

Ernø-Kjølhede, E. and Hansson, F. (2011) 'Measuring Research Performance during a Changing Relationship between Science and Society', Research Evaluation, 20/2: 130–42.

Feller, I. (2006) 'Assessing Quality: Multiple Actors, Multiple Settings, Multiple Criteria: Issues in Assessing Interdisciplinary Research', Research Evaluation, 15/1: 5–15.

Gaziulusoy, A. İ. and Boyle, C. (2013) 'Proposing a Heuristic Reflective Tool for Reviewing Literature in Transdisciplinary Research for Sustainability', Journal of Cleaner Production, 48: 139–47.

Gibbons, M. et al. (1994) The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies. London: Sage Publications.

Hellstrom, T. (2011) 'Homing in on Excellence: Dimensions of Appraisal in Center of Excellence Program Evaluations', Evaluation, 17/2: 117–31.

Hellstrom, T. (2012) 'Epistemic Capacity in Research Environments: A Framework for Process Evaluation', Prometheus, 30/4: 395–409.

Hemlin, S. and Rasmussen, S. B. (2006) 'The Shift in Academic Quality Control', Science, Technology & Human Values, 31/2: 173–98.

Hessels, L. K. and Van Lente, H. (2008) 'Re-thinking New Knowledge Production: A Literature Review and a Research Agenda', Research Policy, 37/4: 740–60.

Huutoniemi, K. (2010) 'Evaluating Interdisciplinary Research', in Frodeman, R., Klein, J. T. and Mitcham, C. (eds) The Oxford Handbook of Interdisciplinarity, pp. 309–20. Oxford: Oxford University Press.

de Jong, S. P. L. et al. (2011) 'Evaluation of Research in Context: An Approach and Two Cases', Research Evaluation, 20/1: 61–72.

Jahn, T. and Keil, F. (2015) 'An Actor-Specific Guideline for Quality Assurance in Transdisciplinary Research', Futures, 65: 195–208.

Kates, R. (2000) 'Sustainability Science', World Academies Conference: Transition to Sustainability in the 21st Century, 18 May 2000, Tokyo, Japan.

Klein, J. T. (2006) 'Afterword: The Emergent Literature on Interdisciplinary and Transdisciplinary Research Evaluation', Research Evaluation, 15/1: 75–80.

Klein, J. T. (2008) 'Evaluation of Interdisciplinary and Transdisciplinary Research: A Literature Review', American Journal of Preventive Medicine, 35/2 Supplement: S116–23. DOI: 10.1016/j.amepre.2008.05.010

Royal Netherlands Academy of Arts and Sciences, Association of Universities in the Netherlands, Netherlands Organization for Scientific Research (KNAW) (2009) Standard Evaluation Protocol 2009–2015: Protocol for Research Assessment in the Netherlands. Netherlands: KNAW. <www.knaw.nl/sep>

Komiyama, H. and Takeuchi, K. (2006) 'Sustainability Science: Building a New Discipline', Sustainability Science, 1: 1–6.

Lahtinen, E. et al. (2005) 'The Development of Quality Criteria for Research: A Finnish Approach', Health Promotion International, 20/3: 306–15.

Lang, D. J. et al. (2012) 'Transdisciplinary Research in Sustainability Science: Practice, Principles, and Challenges', Sustainability Science, 7/S1: 25–43.

Lincoln, Y. S. (1995) 'Emerging Criteria for Quality in Qualitative and Interpretive Research', Qualitative Inquiry, 1/3: 275–89.

Mayne, J. and Stern, E. (2013) Impact Evaluation of Natural Resource Management Research Programs: A Broader View. Canberra: Australian Centre for International Agricultural Research.

Meyrick, J. (2006) 'What is Good Qualitative Research? A First Step Towards a Comprehensive Approach to Judging Rigour/Quality', Journal of Health Psychology, 11/5: 799–808.

Mitchell, C. A. and Willetts, J. R. (2009) 'Quality Criteria for Inter- and Trans-Disciplinary Doctoral Research Outcomes', prepared for ALTC Fellowship: Zen and the Art of Transdisciplinary Postgraduate Studies. Sydney: Institute for Sustainable Futures, University of Technology.

Morrow, S. L. (2005) 'Quality and Trustworthiness in Qualitative Research in Counseling Psychology', Journal of Counseling Psychology, 52/2: 250–60.

Nowotny, H., Scott, P. and Gibbons, M. (2001) Re-Thinking Science. Cambridge: Polity.

Nowotny, H., Scott, P. and Gibbons, M. (2003) '"Mode 2" Revisited: The New Production of Knowledge', Minerva, 41: 179–94.

Öberg, G. (2008) 'Facilitating Interdisciplinary Work: Using Quality Assessment to Create Common Ground', Higher Education, 57/4: 405–15.

Ozga, J. (2007) 'Co-production of Quality in the Applied Education Research Scheme', Research Papers in Education, 22/2: 169–81.

Ozga, J. (2008) 'Governing Knowledge: Research Steering and Research Quality', European Educational Research Journal, 7/3: 261–72.

OECD (2012) Frascati Manual, 6th edn. <http://www.oecd.org/innovation/inno/frascatimanualproposedstandardpracticeforsurveysonresearchandexperimentaldevelopment6thedition>

Overseas Development Institute (ODI) (2004) 'Bridging Research and Policy in International Development: An Analytical and Practical Framework', ODI Briefing Paper. <http://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/198.pdf>

Overseas Development Institute (ODI) (2012) RAPID Outcome Assessment Guide. <http://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/7815.pdf>

Pullin, A. S. and Stewart, G. B. (2006) 'Guidelines for Systematic Review in Conservation and Environmental Management', Conservation Biology, 20/6: 1647–56.

Research Excellence Framework (REF) (2011) Research Excellence Framework 2014: Assessment Framework and Guidance on Submissions, Reference REF 02.2011. UK: REF. <http://www.ref.ac.uk/pubs/2011-02/>

Scott, A. (2007) 'Peer Review and the Relevance of Science', Futures, 39/7: 827–45.

Spaapen, J., Dijstelbloem, H. and Wamelink, F. (2007) Evaluating Research in Context: A Method for Comprehensive Assessment. Netherlands: Consultative Committee of Sector Councils for Research and Development. <http://www.qs.univie.ac.at/fileadmin/user_upload/qualitaetssicherung/PDF/Weitere_Aktivit%C3%A4ten/Eric.pdf>

Spaapen, J. and Van Drooge, L. (2011) 'Introducing "Productive Interactions" in Social Impact Assessment', Research Evaluation, 20: 211–18.

Stige, B., Malterud, K. and Midtgarden, T. (2009) 'Toward an Agenda for Evaluation of Qualitative Research', Qualitative Health Research, 19/10: 1504–16.

td-net (2014) td-net. <www.transdisciplinarity.ch/e/Bibliography/new.php>

Tertiary Education Commission (TEC) (2012) Performance-based Research Fund: Quality Evaluation Guidelines 2012. New Zealand: TEC. <http://www.tec.govt.nz/Documents/Publications/PBRF-Quality-Evaluation-Guidelines-2012.pdf>

Tijssen, R. J. W. (2003) 'Quality Assurance: Scoreboards of Research Excellence', Research Evaluation, 12: 91–103.

White, H. and Phillips, D. (2012) 'Addressing Attribution of Cause and Effect in Small n Impact Evaluations: Towards an Integrated Framework', Working Paper 15. New Delhi: International Initiative for Impact Evaluation.

Wickson, F. and Carew, A. (2014) 'Quality Criteria and Indicators for Responsible Research and Innovation: Learning from Transdisciplinarity', Journal of Responsible Innovation, 1/3: 254–73.

Wickson, F., Carew, A. and Russell, A. W. (2006) 'Transdisciplinary Research: Characteristics, Quandaries and Quality', Futures, 38/9: 1046–59.


Criteria for Good Qualitative Research: A Comprehensive Review

  • Regular Article
  • Open access
  • Published: 18 September 2021
  • Volume 31, pages 679–689 (2022)

  • Drishti Yadav (ORCID: orcid.org/0000-0002-2974-0323)

This review aims to synthesize a published set of evaluative criteria for good qualitative research. The aim is to shed light on existing standards for assessing the rigor of qualitative research encompassing a range of epistemological and ontological standpoints. Using a systematic search strategy, published journal articles that deliberate criteria for rigorous research were identified. Then, references of relevant articles were surveyed to find noteworthy, distinct, and well-defined pointers to good qualitative research. This review presents an investigative assessment of the pivotal features of qualitative research that permit readers to pass judgment on its quality and to commend it as good research when these features are objectively and adequately applied. Overall, this review underlines the crux of qualitative research and accentuates the necessity of evaluating such research by the very tenets of its being. It also offers some prospects and recommendations to improve the quality of qualitative research. Based on the findings of this review, it is concluded that quality criteria are the product of socio-institutional practices and prevailing paradigmatic standpoints. Owing to the paradigmatic diversity of qualitative research, a single and specific set of quality criteria is neither feasible nor anticipated. Since qualitative research is not a cohesive discipline, researchers need to educate and familiarize themselves with the applicable norms and decisive factors for evaluating qualitative research from within its theoretical and methodological framework of origin.

Introduction

“… It is important to regularly dialogue about what makes for good qualitative research” (Tracy, 2010, p. 837)

What constitutes good qualitative research is highly debatable. Qualitative research contains numerous methods, established on diverse philosophical perspectives. Bryman et al. (2008, p. 262) suggest that “It is widely assumed that whereas quality criteria for quantitative research are well-known and widely agreed, this is not the case for qualitative research.” Hence, the question of how to evaluate the quality of qualitative research has been continuously debated. These debates have taken place across many areas of science and technology, including various areas of psychology: general psychology (Madill et al., 2000); counseling psychology (Morrow, 2005); and clinical psychology (Barker & Pistrang, 2005); and other disciplines of the social sciences: social policy (Bryman et al., 2008); health research (Sparkes, 2001); business and management research (Johnson et al., 2006); information systems (Klein & Myers, 1999); and environmental studies (Reid & Gough, 2000). In the literature, these debates are driven by the view that the blanket application to qualitative research of criteria developed around the positivist paradigm is improper. Such debates reflect the wide range of philosophical backgrounds within which qualitative research is conducted (e.g., Sandberg, 2000; Schwandt, 1996). This methodological diversity has led to the formulation of different sets of criteria applicable to qualitative research.

Among qualitative researchers, the dilemma of choosing measures to assess the quality of research is not a new phenomenon, especially when the virtuous triad of objectivity, reliability, and validity (Spencer et al., 2004) is not adequate. Occasionally, the criteria of quantitative research are used to evaluate qualitative research (Cohen & Crabtree, 2008; Lather, 2004). Indeed, Howe (2004) claims that the prevailing paradigm in educational research is scientifically based experimental research. Assumptions about the preeminence of quantitative research can weaken the worth and usefulness of qualitative research by neglecting the importance of matching the research paradigm, the epistemological stance of the researcher, and the choice of methodology to the purpose of the study. Researchers have been cautioned about this in “paradigmatic controversies, contradictions, and emerging confluences” (Lincoln & Guba, 2000).

In general, qualitative research comes from a very different paradigmatic stance and intrinsically demands distinctive criteria for evaluating good research and the varieties of research contributions that can be made. This review presents a series of evaluative criteria for qualitative researchers, arguing that the choice of criteria needs to be compatible with the unique nature of the research in question (its methodology, aims, and assumptions). It aims to assist researchers in identifying some of the indispensable features or markers of high-quality qualitative research. In a nutshell, the purpose of this systematic literature review is to analyze the existing knowledge on high-quality qualitative research and to verify the existence of research studies dealing with the critical assessment of qualitative research across diverse paradigmatic stances. Contrary to existing reviews, this review also suggests some critical directions to follow to improve the quality of qualitative research from different epistemological and ontological perspectives. It is also intended to provide guidelines to accelerate future developments and dialogues among qualitative researchers in the context of assessing qualitative research.

The rest of this review article is structured as follows: the Methods section describes the method followed for performing this review. The section Criteria for Evaluating Qualitative Studies provides a comprehensive description of the criteria for evaluating qualitative studies, followed by a summary of strategies to improve the quality of qualitative research in Improving Quality: Strategies. The section How to Assess the Quality of the Research Findings? provides details on how to assess the quality of research findings. After that, some quality checklists (as tools to evaluate quality) are discussed in Quality Checklists: Tools for Assessing the Quality. The review ends with concluding remarks, and with some prospects for enhancing the quality and usefulness of qualitative research in the social and techno-scientific research community, in Conclusions, Future Directions, and Outlook.

For this review, a comprehensive literature search was performed across many databases using generic search terms such as Qualitative Research, Criteria, etc. The following databases were chosen for the literature search based on the high number of results they returned: IEEE Xplore, ScienceDirect, PubMed, Google Scholar, and Web of Science. The following keywords (and their combinations using the Boolean connectives OR/AND) were adopted for the literature search: qualitative research, criteria, quality, assessment, and validity. Synonyms for these keywords were collected and arranged in a logical structure (see Table 1). All journal and conference publications from 1950 to 2021 were considered for the search. Other articles extracted from the references of the papers identified in the electronic search were also included. A large number of publications on qualitative research were retrieved during the initial screening. Hence, to focus the search on criteria for good qualitative research, an inclusion criterion was incorporated in the search string.

From the selected databases, the search retrieved a total of 765 publications. Then, the duplicate records were removed. After that, based on the title and abstract, the remaining 426 publications were screened for their relevance by using the following inclusion and exclusion criteria (see Table 2 ). Publications focusing on evaluation criteria for good qualitative research were included, whereas those works which delivered theoretical concepts on qualitative research were excluded. Based on the screening and eligibility, 45 research articles were identified that offered explicit criteria for evaluating the quality of qualitative research and were found to be relevant to this review.
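As an illustration of this search-and-screening flow, the sketch below builds a Boolean query from keyword groups and tallies the record counts reported above. The synonym groups are invented stand-ins for Table 1 (which is not reproduced here), and the quoting and AND/OR syntax is a generic approximation; each database has its own query format.

```python
# Illustrative sketch of the search-and-screening flow described above.
# The synonym groups are invented stand-ins for Table 1; real database
# query syntax varies, so build_query() is only a generic approximation.

keyword_groups = [
    ["qualitative research", "qualitative study"],
    ["criteria", "standards", "guidelines"],
    ["quality", "rigor", "validity", "assessment"],
]

def build_query(groups: list[list[str]]) -> str:
    """Join synonyms with OR and groups with AND, quoting multi-word terms."""
    clauses = []
    for group in groups:
        terms = " OR ".join(f'"{t}"' if " " in t else t for t in group)
        clauses.append(f"({terms})")
    return " AND ".join(clauses)

print(build_query(keyword_groups))

# PRISMA-style tally using the counts reported in the text:
# 765 records retrieved, 426 remaining after deduplication,
# 45 included after title/abstract screening and eligibility checks.
retrieved, after_dedup, included = 765, 426, 45
duplicates = retrieved - after_dedup
excluded = after_dedup - included
print(f"duplicates removed: {duplicates}; "
      f"excluded at screening/eligibility: {excluded}; included: {included}")
```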

Figure 1 illustrates the complete review process in the form of a PRISMA flow diagram. PRISMA, i.e., “preferred reporting items for systematic reviews and meta-analyses”, is employed in systematic reviews to refine the quality of reporting.

Figure 1. PRISMA flow diagram illustrating the search and inclusion process. N represents the number of records.

Criteria for Evaluating Qualitative Studies

Fundamental Criteria: General Research Quality

Various researchers have put forward criteria for evaluating qualitative research, which have been summarized in Table 3. Also, the criteria outlined in Table 4 effectively deliver the various approaches to evaluate and assess the quality of qualitative work. The entries in Table 4 are based on Tracy's “Eight big-tent criteria for excellent qualitative research” (Tracy, 2010). Tracy argues that high-quality qualitative work should formulate criteria focusing on the worthiness, relevance, timeliness, significance, morality, and practicality of the research topic, and the ethical stance of the research itself. Researchers have also suggested a series of questions as guiding principles to assess the quality of a qualitative study (Mays & Pope, 2020). Nassaji (2020) argues that good qualitative research should be robust, well informed, and thoroughly documented.

Qualitative Research: Interpretive Paradigms

All qualitative researchers follow highly abstract principles which bring together beliefs about ontology, epistemology, and methodology. These beliefs govern how the researcher perceives and acts. The net that encompasses the researcher's epistemological, ontological, and methodological premises is referred to as a paradigm, or an interpretive structure, a “basic set of beliefs that guides action” (Guba, 1990). Four major interpretive paradigms structure qualitative research: positivist and postpositivist, constructivist-interpretive, critical (Marxist, emancipatory), and feminist-poststructural. The complexity of these four abstract paradigms increases at the level of concrete, specific interpretive communities. Table 5 presents these paradigms and their assumptions, including their criteria for evaluating research and the typical form that an interpretive or theoretical statement assumes in each paradigm. Moreover, for evaluating qualitative research, quantitative conceptualizations of reliability and validity have proven incompatible (Horsburgh, 2003). In addition, a series of questions has been put forward in the literature to assist a reviewer (who is proficient in qualitative methods) in the meticulous assessment and endorsement of qualitative research (Morse, 2003). Hammersley (2007) also suggests that guiding principles for qualitative research are advantageous, but that methodological pluralism should not be simply acknowledged for all qualitative approaches. Seale (1999) also points out the significance of methodological cognizance in research studies.

Table 5 reflects that criteria for assessing the quality of qualitative research are the aftermath of socio-institutional practices and existing paradigmatic standpoints. Owing to the paradigmatic diversity of qualitative research, a single set of quality criteria is neither possible nor desirable. Hence, the researchers must be reflexive about the criteria they use in the various roles they play within their research community.

Improving Quality: Strategies

Another critical question is “How can qualitative researchers ensure that the abovementioned quality criteria are met?” Lincoln and Guba (1986) delineated several strategies to strengthen each criterion of trustworthiness. Other researchers (Merriam & Tisdell, 2016; Shenton, 2004) have also presented such strategies. A brief description of these strategies is shown in Table 6.

It is worth mentioning that generalizability is also an integral part of qualitative research (Hays & McKibben, 2021). In general, the guiding principle pertaining to generalizability speaks to inducing and comprehending knowledge so as to synthesize the interpretive components of an underlying context. Table 7 summarizes the main metasynthesis steps required to ascertain generalizability in qualitative research.

Figure 2 reflects the crucial components of a conceptual framework and their contribution to decisions regarding research design, implementation, and application of results to future thinking, study, and practice (Johnson et al., 2020). The synergy and interrelationship of these components signify their role in the different stances of a qualitative research study.

Figure 2. Essential elements of a conceptual framework.

In a nutshell, to assess the rationale of a study, its conceptual framework and research question(s), quality criteria must take account of the following: lucid context for the problem statement in the introduction; well-articulated research problems and questions; precise conceptual framework; distinct research purpose; and clear presentation and investigation of the paradigms. These criteria would expedite the quality of qualitative research.

How to Assess the Quality of the Research Findings?

The inclusion of quotes or similar research data enhances the confirmability of the write-up of the findings. The use of expressions (for instance, “80% of all respondents agreed that” or “only one of the interviewees mentioned that”) may also quantify qualitative findings (Stenfors et al., 2020). On the other hand, persuasive reasons why this may not help strengthen the research have also been provided (Monrouxe & Rees, 2020). Further, the Discussion and Conclusion sections of an article also prove robust markers of high-quality qualitative research, as elucidated in Table 8.

Quality Checklists: Tools for Assessing the Quality

Numerous checklists are available to speed up the assessment of the quality of qualitative research. However, if used uncritically and recklessly, without regard to the research context, these checklists may be counterproductive. Such lists and guiding principles may assist in pinpointing the markers of high-quality qualitative research; however, considering the enormous variation in authors' theoretical and philosophical contexts, heavy reliance on such checklists may say little about whether the findings can be applied in your setting. A combination of such checklists might be appropriate for novice researchers. Some of these checklists are listed below:

The most commonly used framework is the Consolidated Criteria for Reporting Qualitative Research (COREQ) (Tong et al., 2007). This framework is recommended by some journals to be followed by authors during article submission.

Standards for Reporting Qualitative Research (SRQR) is another checklist that has been created particularly for medical education (O'Brien et al., 2014).

Also, Tracy (2010) and the Critical Appraisal Skills Programme (CASP, 2021) offer criteria for qualitative research relevant across methods and approaches.

Further, researchers have also outlined different criteria as hallmarks of high-quality qualitative research. For instance, the “Road Trip Checklist” (Epp & Otnes, 2021) provides a quick reference to specific questions to address different elements of high-quality qualitative research.

Conclusions, Future Directions, and Outlook

This work presents a broad review of the criteria for good qualitative research. In addition, it presents an exploratory analysis of the essential elements of qualitative research that can enable readers of qualitative work to judge it as good research when objectively and adequately utilized. In this review, some of the essential markers that indicate high-quality qualitative research have been highlighted. I scope them narrowly to achieving rigor in qualitative research and note that they do not completely cover the broader considerations necessary for high-quality research. This review points out that a universal, versatile, one-size-fits-all guideline for evaluating the quality of qualitative research does not exist; in other words, it emphasizes the non-existence of a set of common guidelines among qualitative researchers. In unison, this review reinforces that each qualitative approach should be treated uniquely on account of its own distinctive features for different epistemological and disciplinary positions. Owing to the sensitivity of the worth of qualitative research to the specific context and the type of paradigmatic stance, researchers should themselves analyze what approaches can be, and must be, tailored to suit the distinct characteristics of the phenomenon under investigation. Although this article does not claim to put forward a magic bullet or a one-stop solution for dealing with dilemmas about how, why, or whether to evaluate the “goodness” of qualitative research, it offers a platform to assist researchers in improving their qualitative studies. This work provides an assembly of concerns to reflect on, a series of questions to ask, and multiple sets of criteria to look at, when attempting to determine the quality of qualitative research. Overall, this review underlines the crux of qualitative research and accentuates the need to evaluate such research by the very tenets of its being. Bringing together the vital arguments and delineating the requirements that good qualitative research should satisfy, this review strives to equip researchers as well as reviewers to make well-versed judgments about the worth and significance of the qualitative research under scrutiny. In a nutshell, a comprehensive portrayal of the research process (from the context of research to the research objectives, research questions and design, theoretical foundations, and from approaches to collecting data to analyzing the results and deriving inferences) frequently improves the quality of qualitative research.

Prospects: A Road Ahead for Qualitative Research

Qualitative research is a vibrant and evolving discipline in which different epistemological and disciplinary positions have their own characteristics and importance. Unsurprisingly, given its varied and still-developing features, no consensus on quality criteria has been reached to date. Researchers have raised various concerns and proposed several recommendations for editors and reviewers on conducting reviews of critical qualitative research (Levitt et al., 2021; McGinley et al., 2021). The following are some prospects and recommendations for the maturation of qualitative research and its quality evaluation:

In general, most manuscript and grant reviewers are not qualitative experts and are therefore likely to prefer a broad set of criteria. However, researchers and reviewers need to keep in mind that it is inappropriate to apply the same approaches and standards across all qualitative research. Future work should therefore focus on educating researchers and reviewers to evaluate qualitative research from within the appropriate theoretical and methodological context.

There is an urgent need to revisit and critically reassess some well-known and widely accepted tools (including checklists such as COREQ and SRQR), interrogating their applicability to different aspects of qualitative research along with their epistemological ramifications.

Efforts should be made to create more space for creativity, experimentation, and dialogue between the diverse traditions of qualitative research. This would help to avoid imposing one’s own set of quality criteria on the work carried out by others.

Moreover, journal reviewers need to be aware of various methodological practices and philosophical debates.

It is pivotal to highlight the views and concerns of qualitative researchers and to bring them into a more open and transparent dialogue about assessing qualitative research in techno-scientific, academic, sociocultural, and political arenas.

Frequent debate on the use of evaluative criteria is required to resolve outstanding issues, including the applicability of a single set of criteria across disciplines. Such debate would not only benefit qualitative researchers themselves but, above all, strengthen the vitality of the entire discipline.

To conclude, I expect that these criteria, and my perspective on them, may transfer to other methods, approaches, and contexts. I hope they spark dialogue and debate about the criteria for excellent qualitative research and about the underpinnings of the discipline more broadly, and thereby help improve the quality of qualitative studies. I also anticipate that this review will help researchers reflect on the quality of their own work, substantiate their research designs, and assist reviewers in evaluating qualitative research for journals. Finally, I point to the need for a framework encompassing the prerequisites of a qualitative study, formulated through the cohesive efforts of qualitative researchers from different disciplines and theoretic-paradigmatic origins. Tailoring such a framework of guiding principles would pave the way for qualitative researchers to consolidate the status of qualitative research in the wider open-science debate. Dialogue on this issue across different approaches is crucial for the future of socio-techno-educational research.

Amin, M. E. K., Nørgaard, L. S., Cavaco, A. M., Witry, M. J., Hillman, L., Cernasev, A., & Desselle, S. P. (2020). Establishing trustworthiness and authenticity in qualitative pharmacy research. Research in Social and Administrative Pharmacy, 16 (10), 1472–1482.


Barker, C., & Pistrang, N. (2005). Quality criteria under methodological pluralism: Implications for conducting and evaluating research. American Journal of Community Psychology, 35 (3–4), 201–212.

Bryman, A., Becker, S., & Sempik, J. (2008). Quality criteria for quantitative, qualitative and mixed methods research: A view from social policy. International Journal of Social Research Methodology, 11 (4), 261–276.

Caelli, K., Ray, L., & Mill, J. (2003). ‘Clear as mud’: Toward greater clarity in generic qualitative research. International Journal of Qualitative Methods, 2 (2), 1–13.

CASP (2021). CASP checklists. Retrieved May 2021 from https://casp-uk.net/casp-tools-checklists/

Cohen, D. J., & Crabtree, B. F. (2008). Evaluative criteria for qualitative research in health care: Controversies and recommendations. The Annals of Family Medicine, 6 (4), 331–339.

Denzin, N. K., & Lincoln, Y. S. (2005). Introduction: The discipline and practice of qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The sage handbook of qualitative research (pp. 1–32). Sage Publications Ltd.


Elliott, R., Fischer, C. T., & Rennie, D. L. (1999). Evolving guidelines for publication of qualitative research studies in psychology and related fields. British Journal of Clinical Psychology, 38 (3), 215–229.

Epp, A. M., & Otnes, C. C. (2021). High-quality qualitative research: Getting into gear. Journal of Service Research . https://doi.org/10.1177/1094670520961445

Guba, E. G. (1990). The paradigm dialog . Sage Publications.

Hammersley, M. (2007). The issue of quality in qualitative research. International Journal of Research and Method in Education, 30 (3), 287–305.

Haven, T. L., Errington, T. M., Gleditsch, K. S., van Grootel, L., Jacobs, A. M., Kern, F. G., & Mokkink, L. B. (2020). Preregistering qualitative research: A Delphi study. International Journal of Qualitative Methods, 19 , 1609406920976417.

Hays, D. G., & McKibben, W. B. (2021). Promoting rigorous research: Generalizability and qualitative research. Journal of Counseling and Development, 99 (2), 178–188.

Horsburgh, D. (2003). Evaluation of qualitative research. Journal of Clinical Nursing, 12 (2), 307–312.

Howe, K. R. (2004). A critique of experimentalism. Qualitative Inquiry, 10 (1), 42–46.

Johnson, J. L., Adkins, D., & Chauvin, S. (2020). A review of the quality indicators of rigor in qualitative research. American Journal of Pharmaceutical Education, 84 (1), 7120.

Johnson, P., Buehring, A., Cassell, C., & Symon, G. (2006). Evaluating qualitative management research: Towards a contingent criteriology. International Journal of Management Reviews, 8 (3), 131–156.

Klein, H. K., & Myers, M. D. (1999). A set of principles for conducting and evaluating interpretive field studies in information systems. MIS Quarterly, 23 (1), 67–93.

Lather, P. (2004). This is your father’s paradigm: Government intrusion and the case of qualitative research in education. Qualitative Inquiry, 10 (1), 15–34.

Levitt, H. M., Morrill, Z., Collins, K. M., & Rizo, J. L. (2021). The methodological integrity of critical qualitative research: Principles to support design and research review. Journal of Counseling Psychology, 68 (3), 357.

Lincoln, Y. S., & Guba, E. G. (1986). But is it rigorous? Trustworthiness and authenticity in naturalistic evaluation. New Directions for Program Evaluation, 1986 (30), 73–84.

Lincoln, Y. S., & Guba, E. G. (2000). Paradigmatic controversies, contradictions and emerging confluences. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 163–188). Sage Publications.

Madill, A., Jordan, A., & Shirley, C. (2000). Objectivity and reliability in qualitative analysis: Realist, contextualist and radical constructionist epistemologies. British Journal of Psychology, 91 (1), 1–20.

Mays, N., & Pope, C. (2020). Quality in qualitative research. Qualitative Research in Health Care . https://doi.org/10.1002/9781119410867.ch15

McGinley, S., Wei, W., Zhang, L., & Zheng, Y. (2021). The state of qualitative research in hospitality: A 5-year review 2014 to 2019. Cornell Hospitality Quarterly, 62 (1), 8–20.

Merriam, S., & Tisdell, E. (2016). Qualitative research: A guide to design and implementation . Jossey-Bass.

Meyer, M., & Dykes, J. (2019). Criteria for rigor in visualization design study. IEEE Transactions on Visualization and Computer Graphics, 26 (1), 87–97.

Monrouxe, L. V., & Rees, C. E. (2020). When I say… quantification in qualitative research. Medical Education, 54 (3), 186–187.

Morrow, S. L. (2005). Quality and trustworthiness in qualitative research in counseling psychology. Journal of Counseling Psychology, 52 (2), 250.

Morse, J. M. (2003). A review committee’s guide for evaluating qualitative proposals. Qualitative Health Research, 13 (6), 833–851.

Nassaji, H. (2020). Good qualitative research. Language Teaching Research, 24 (4), 427–431.

O’Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine, 89 (9), 1245–1251.

O’Connor, C., & Joffe, H. (2020). Intercoder reliability in qualitative research: Debates and practical guidelines. International Journal of Qualitative Methods, 19 , 1609406919899220.

Reid, A., & Gough, S. (2000). Guidelines for reporting and evaluating qualitative research: What are the alternatives? Environmental Education Research, 6 (1), 59–91.

Rocco, T. S. (2010). Criteria for evaluating qualitative studies. Human Resource Development International . https://doi.org/10.1080/13678868.2010.501959

Sandberg, J. (2000). Understanding human competence at work: An interpretative approach. Academy of Management Journal, 43 (1), 9–25.

Schwandt, T. A. (1996). Farewell to criteriology. Qualitative Inquiry, 2 (1), 58–72.

Seale, C. (1999). Quality in qualitative research. Qualitative Inquiry, 5 (4), 465–478.

Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22 (2), 63–75.

Sparkes, A. C. (2001). Myth 94: Qualitative health researchers will agree about validity. Qualitative Health Research, 11 (4), 538–552.

Spencer, L., Ritchie, J., Lewis, J., & Dillon, L. (2004). Quality in qualitative evaluation: A framework for assessing research evidence.

Stenfors, T., Kajamaa, A., & Bennett, D. (2020). How to assess the quality of qualitative research. The Clinical Teacher, 17 (6), 596–599.

Taylor, E. W., Beck, J., & Ainsworth, E. (2001). Publishing qualitative adult education research: A peer review perspective. Studies in the Education of Adults, 33 (2), 163–179.

Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care, 19 (6), 349–357.

Tracy, S. J. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry, 16 (10), 837–851.

Funding

Open access funding provided by TU Wien (TUW).

Author information

Authors and Affiliations

Faculty of Informatics, Technische Universität Wien, 1040, Vienna, Austria

Drishti Yadav


Corresponding author

Correspondence to Drishti Yadav .

Ethics declarations

Conflict of Interest

The author declares no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Yadav, D. Criteria for Good Qualitative Research: A Comprehensive Review. Asia-Pacific Edu Res 31, 679–689 (2022). https://doi.org/10.1007/s40299-021-00619-0


Accepted: 28 August 2021

Published: 18 September 2021

Issue date: December 2022

DOI: https://doi.org/10.1007/s40299-021-00619-0


Keywords: Qualitative research; Evaluative criteria

Journal of the Advanced Practitioner in Oncology, 12(4), May 2021

Quality Improvement Projects and Clinical Research Studies


Every day, I witness firsthand the amazing things that advanced practitioners and nurse scientists accomplish. Through the conduct of quality improvement (QI) projects and clinical research studies, advanced practitioners and nurse scientists have the opportunity to contribute substantially not only to their organizations, but also to their own personal and professional growth.

Recently, the associate editors and staff at JADPRO convened to discuss the types of articles our readership may be interested in. Since we at JADPRO believe that QI projects and clinical research studies are highly valuable methods to improve clinical processes or seek answers to questions, you will see that we have highlighted various QI and research projects within the Research and Scholarship column of this and future issues. There have also been articles published in JADPRO about QI and research ( Gillespie, 2018 ; Kurtin & Taher, 2020 ). As a refresher, let’s explore the differences between a QI project and clinical research.

Quality Improvement

As leaders in health care, advanced practitioners often conduct QI projects to improve their internal processes or streamline clinical workflow. These QI projects use a multidisciplinary team comprising a team leader as well as nurses, PAs, pharmacists, physicians, social workers, and program administrators to address important questions that impact patients. Since QI projects use strategic processes and methods to analyze existing data and all patients participate, institutional review board (IRB) approval is usually not needed. Common frameworks, such as Lean, Six Sigma, and the Model for Improvement can be used. An attractive aspect of QI projects is that these are generally quicker to conduct and report on than clinical research, and often with quantifiable benefits to a large group within a system ( Table 1 ).

Table 1. Comparison of QI projects and clinical research

| QI project | Clinical research |
| --- | --- |
| Intended for a specific group or program | Intended for future groups or future patients |
| Aligns with patient interest | Benefit to patient is not known |
| All patients/participants are welcome to participate | Patients/participants can opt out (consent), sampling |
| Arises from responsibility to patients | Can arise from history of scandal |
| Strategic processes derived from existing data | Systematic research generates new data |

Clinical Research

Conducting clinical research through an IRB-approved study is another area in which advanced practitioners and nurse scientists gain new knowledge and contribute to scientific evidence-based practice. Research is intended for specific groups of patients who are protected from harm through the IRB and ethical principles. Research can potentially benefit a larger group, but benefits to participants are often unknown during the study period.

Clinical research poses many challenges at various stages of what can be a lengthy process. First, the researcher conducts a review of the literature to identify gaps in existing knowledge. Then, the researcher must be diligent in their self-reflection (is this phenomenon worth studying?) and in developing the sampling and statistical methods to ensure validity and reliability of the research ( Higgins & Straub, 2006 ). A team of additional researchers and support staff is integral to completing the research and disseminating findings. A well-designed clinical trial is worth the time and effort it takes to answer important clinical questions.

So, as an advanced practitioner, would a QI project be better to conduct than a clinical research study? That depends. A QI project uses a specific process, measures, and existing data to improve outcomes in a specific group. A research study uses an IRB-approved study protocol, strategic methods, and generates new data to hopefully benefit a larger group.

In This Issue

Both QI projects and clinical research can provide evidence to base one’s interventions on and enhance the lives of patients in one way or another. I hope you will agree that this issue is filled with valuable information on a wide range of topics. In the following pages, you will learn about findings of a QI project to integrate palliative care into ambulatory oncology. In a phenomenological study, Carrasco explores patient communication preferences around cancer symptom reporting during cancer treatment.

We have two excellent review articles for you as well. Rogers and colleagues review the management of hematologic adverse events of immune checkpoint inhibitors, and Lemke reviews the evidence for use of ginseng in the management of cancer-related fatigue. In Grand Rounds, Flagg and Pierce share an interesting case of essential thrombocythemia in a 15-year-old, with valuable considerations in the pediatric population. May and colleagues review practical considerations for integrating biosimilars into clinical practice, and Moore and Thompson review BTK inhibitors in B-cell malignancies.

  • Higgins, P. A., & Straub, A. J. (2006). Understanding the error of our ways: Mapping the concepts of validity and reliability. Nursing Outlook, 54(1), 23–29. https://doi.org/10.1016/j.outlook.2004.12.004
  • Gillespie, T. W. (2018). Do the right study: Quality improvement projects and human subject research—both valuable, simply different. Journal of the Advanced Practitioner in Oncology, 9(5), 471–473. https://doi.org/10.6004/jadpro.2018.9.5.1
  • Kurtin, S. E., & Taher, R. (2020). Clinical trial design and drug approval in oncology: A primer for the advanced practitioner in oncology. Journal of the Advanced Practitioner in Oncology, 11(7), 736–751. https://doi.org/10.6004/jadpro.2020.11.7.7


How do you determine the quality of a journal article?

Published on October 17, 2014 by Bas Swaen. Revised on March 4, 2019.

In the theoretical framework of your thesis, you support the research that you want to perform by means of a literature review . Here, you are looking for earlier research about your subject. These studies are often published in the form of scientific articles in journals (scientific publications).

Why is good quality important?

The better the quality of the articles that you use in the literature review , the stronger your own research will be. When you use articles that are not well respected, you run the risk that the conclusions you draw will be unfounded. Your supervisor will always check the article sources for the conclusions you draw.

We will use an example to explain how you can judge the quality of a scientific article. We will use the following article as our example:

Example article

Perrett, D. I., Burt, D. M., Penton-Voak, I. S., Lee, K. J., Rowland, D. A., & Edwards, R. (1999). Symmetry and Human Facial Attractiveness.  Evolution and Human Behavior ,  20 , 295-307. Retrieved from  http://www.grajfoner.com/Clanki/Perrett%201999%20Symetry%20Attractiveness.pdf

This article is about the possible link between facial symmetry and the attractiveness of a human face.


Check the following points

1. Where is the article published?

The journal (academic publication) where the article is published says something about the quality of the article. Journals are ranked in the Journal Quality List (JQL). If the journal you used is ranked at the top of your professional field in the JQL, then you can assume that the quality of the article is high.

The article from the example is published in the journal Evolution and Human Behavior. The journal is not on the Journal Quality List, but after googling the publication, multiple sources indicate that it is nevertheless among the top journals in the field of psychology (see Journal Ranking at http://www.ehbonline.org/). The quality of the source is thus high enough to use it.

So, if a journal is not listed in the Journal Quality List then it is worthwhile to google it. You will then find out more about the quality of the journal.

2. Who is the author?

The next step is to look at who the author of the article is:

  • What do you know about the person who wrote the paper?
  • Has the author done much research in this field?
  • What do others say about the author?
  • What is the author’s background?
  • At which university does the author work? Does this university have a good reputation?

The lead author of the article (Perrett) had already done much work within this research field, including prior studies of predictors of attractiveness. Penton-Voak, one of the other authors, also collaborated on these studies. In 1999, Perrett and Penton-Voak were both professors at the University of St Andrews in the United Kingdom, which is among the top 100 universities in the world. There is less information available about the other authors; they may have been students who assisted the professors.

3. What is the date of publication?

In which year is the article published? The more recent the research, the better. If the research is a bit older, then it’s smart to check whether any follow-up research has taken place. Maybe the author continued the research and more useful results have been published.

Tip! If you’re searching for an article in Google Scholar , then click on ‘Since 2014’ in the left hand column. If you can’t find anything (more) there, then select ‘Since 2013’. If you work down the row in this manner, you will find the most recent studies.

The article from the example was published in 1999. This is not extremely old, but quite a bit of follow-up research has probably been done since then. For example, via Google Scholar I quickly found a 2013 article investigating the influence of symmetry on facial attractiveness in children. The 1999 article can serve as a good foundation for reading up on the subject, but it is advisable to find out how research into the influence of symmetry on facial attractiveness has developed since.

4. What do other researchers say about the paper?

Find out who the experts are in this field of research. Do they support the research, or are they critical of it?

By searching in Google Scholar, I see that the article has been cited at least 325 times, which means it is referenced in at least 325 other articles. Looking at the authors of those articles, I see that they are experts in the research field, and that they cite the article as support rather than to criticize it.

5. Determine the quality

Now look back: how did the article score on the points mentioned above? Based on that, you can determine its quality.

The example article scored ‘reasonable’ to ‘good’ on all points, so we can consider it to be of good quality and therefore useful in, for example, a literature review. Because the article is somewhat dated, however, it is wise to also search for more recent research.
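To illustrate, the five checks above can be combined into a simple scoring rubric. The sketch below is a toy example, not a Scribbr tool; the criterion names, ratings, and threshold are illustrative assumptions.

```python
# Toy rubric for judging a journal article, based on the five checks above.
# The criteria, ratings, and threshold are illustrative assumptions.

RATINGS = {"poor": 0, "reasonable": 1, "good": 2}

def article_quality(scores: dict) -> float:
    """Average rating (0-2) across all criteria."""
    return sum(RATINGS[r] for r in scores.values()) / len(scores)

example = {
    "journal ranking": "good",         # top journal in its field
    "author reputation": "good",       # established researchers
    "publication date": "reasonable",  # 1999, so check for follow-up work
    "reception by peers": "good",      # 325+ supportive citations
}

score = article_quality(example)
print(f"Quality score: {score:.2f} / 2 -> {'usable' if score >= 1.0 else 'be cautious'}")
```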

Cite this Scribbr article


Swaen, B. (2019, March 04). How do you determine the quality of a journal article?. Scribbr. Retrieved August 29, 2024, from https://www.scribbr.com/tips/how-do-you-determine-the-quality-of-a-journal-article/


Quality Assurance: Recently Published Documents


Quality Assurance Information System: The Case of the TEI of Athens

Systematic Assessment of Data Quality and Quality Assurance/Quality Control (QA/QC) of Current Research on Microplastics in Biosolids and Agricultural Soils

Internal Quality Assurance System (Sistem Penjaminan Mutu Internal, SPMI)

Abstract: The purpose of this research is to examine students' educational achievement through an internal quality assurance system (SPMI), and to use SPMI as a tool for achieving and maintaining school progress. The research takes a quantitative approach. Data were obtained through interviews, observations, and library studies, and the results were analyzed using data reduction, data presentation, and conclusion drawing. The findings point to the importance of implementing SPMI in school educational institutions; the study was conducted at SMAN 3 Wajo. The results show that: (1) SPMI, carried out continuously, contributes to the attainment of a superior accreditation rating; (2) the SPMI cycle, carried out in its entirety, guides the various tasks of school stakeholders; and (3) a quality culture can be created through the implementation of SPMI.

Keywords: Internal Quality Assurance System; Quality of SMAN 3 Wajo School

Sigma Metrics in Quality Control: An Innovative Tool

The clinical laboratory in today's world is a rapidly evolving field that faces constant pressure to produce quick and reliable results. The sigma metric is a tool that helps to reduce process variability, quantify the approximate number of analytical errors, and evaluate and guide quality control (QC) practices. The aim was to analyze the sigma metrics of 16 biochemistry analytes using an ERBA XL 200 biochemistry analyzer, interpret parameter performance, compare analyzer performance with other Middle East studies, and modify existing QC practices.

The study was undertaken at a clinical laboratory over 12 months, from January to December 2020, for the following analytes: albumin (ALB), alanine aminotransferase (SGPT), aspartate aminotransferase (SGOT), alkaline phosphatase (ALKP), total bilirubin (BIL T), direct bilirubin (BIL D), calcium (CAL), cholesterol (CHOL), creatinine (CREAT), gamma-glutamyl transferase (GGT), glucose (GLUC), high-density lipoprotein (HDL), triglyceride (TG), total protein (PROT), uric acid (UA), and urea. The coefficient of variation (CV%) was calculated from internal quality control (IQC) records, and the bias% from external quality assurance scheme (EQAS) records. Total allowable error (TEa) was obtained from the Clinical Laboratory Improvement Amendments (CLIA) guidelines. Sigma metrics were calculated from CV%, bias%, and TEa for the above parameters.

Five analytes in level 1 and eight analytes in level 2 showed greater than 6 sigma performance, indicating world-class quality. Cholesterol and glucose (levels 1 and 2) and creatinine (level 1) showed >4 sigma, i.e., acceptable performance. Urea (both levels) and GGT (level 1) showed <3 sigma and were therefore identified as the problem analytes. Sigma metrics help to assess analytic methodologies and can serve as an important self-assessment tool for quality assurance in the clinical laboratory. In this study, sigma metric evaluation helped to evaluate the quality of several analytes and to categorize them from high-performing to problematic, indicating the utility of the tool. In conclusion, parameters showing less than 3 sigma need strict monitoring and modification of quality control procedures, with a change of method if necessary.
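The abstract does not spell out the calculation, but the sigma metric it refers to is conventionally computed as (TEa - |bias|) / CV, with all three quantities expressed as percentages. A minimal sketch under that convention, using illustrative numbers rather than the study's data:

```python
# Sigma metric as conventionally defined: (TEa - |bias|) / CV,
# with TEa, bias, and CV all expressed as percentages.
# The example values are illustrative, not the study's data.

def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    return (tea_pct - abs(bias_pct)) / cv_pct

def performance(sigma: float) -> str:
    if sigma >= 6:
        return "world-class"
    if sigma >= 4:
        return "acceptable"
    if sigma >= 3:
        return "marginal"
    return "problem analyte: review QC procedure"

# Hypothetical analyte: TEa 10%, bias 1.5%, CV 2.0%
s = sigma_metric(10.0, 1.5, 2.0)
print(f"sigma = {s:.2f} ({performance(s)})")  # sigma = 4.25 (acceptable)
```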

Quality Assurance for On-Table Adaptive Magnetic Resonance Guided Radiation Therapy: A Software Tool to Complement Secondary Dose Calculation and Failure Modes Discovered in Clinical Routine

Editorial Comment: Factors Impacting US LI-RADS Visualization Scores—Optimizing Future Quality Assurance and Standards

The Association of Laryngeal Position on Videolaryngoscopy and Time Taken to Intubate Using Spatial Point Pattern Analysis of Prospectively Collected Quality Assurance Data

The Impact of Policy Changes, Dedicated Funding and Implementation Support on Early Intervention Programs for Psychosis

Introduction: Early intervention services for psychosis (EIS) are associated with improved clinical and economic outcomes. In Quebec, clinicians led the development of EIS from the late 1980s until 2017, when the provincial government announced EIS-specific funding, implementation support, and provincial standards. This provides an interesting context in which to understand the impacts of policy commitments on EIS. Our primary objective was to describe the implementation of EIS three years after this increased political involvement.

Methods: This cross-sectional descriptive study was conducted in 2020 through a 161-question online survey, modeled after our team's earlier surveys, on the following themes: program characteristics, accessibility, program operations, clinical services, training/supervision, and quality assurance. Descriptive statistics were performed. Where relevant, we compared data on programs founded before and after 2017.

Results: Twenty-eight of 33 existing EIS completed the survey. Between 2016 and 2020, the proportion of Quebec's population having access to EIS rose from 46% to 88%; more than 1,300 yearly admissions were reported by surveyed EIS, surpassing the government's epidemiological estimates. Most programs set accessibility targets, adopted inclusive intake criteria and an open referral policy, and engaged in education of referral sources. A wide range of biopsychosocial interventions and assertive outreach were offered by interdisciplinary teams. Administrative/organisational components, such as clinical/administrative data collection, respecting recommended patient-to-case-manager ratios, and quality assurance, were less widely implemented.

Conclusion: Increased governmental implementation support, including dedicated funding, led to widespread implementation of good-quality, accessible EIS. Though some differences were found between programs founded before and after 2017, there was no overall discernible impact of year of implementation. Persisting challenges to collecting data may impede monitoring, data-informed decision-making, and quality improvement. Maintaining fidelity and meeting provincial standards may prove challenging as programs mature and adapt to their catchment areas' specificities and as caseloads increase. Governmental incidence estimates may need recalculation considering recent epidemiological data.

Current Status of Quality Assurance Scheme in Selected Undergraduate Medical Colleges of Bangladesh

This descriptive cross-sectional study was carried out to determine the current status of the Quality Assurance Scheme (QAS) in undergraduate medical colleges of Bangladesh. It covered eight medical colleges (four government and four non-government) over the period July 2015 to June 2016. The study used one open-question interview schedule for college authorities and another for heads of departments. The study revealed that 87.5% of colleges had a QAS, 75% of college authorities held regular meetings of the academic coordination committee, 50% of colleges had an active Medical Education Unit, and 87.5% of college authorities reported publication of a journal at their college. The researchers also interviewed 53 heads of departments with open questions about the distribution and collection of personal review forms, their submission with recommendations to the academic coordinator, and the annual review meeting on faculty development. The interviews revealed a total absence of this practice, which is prescribed in the national guidelines and tools for the Quality Assurance Scheme for medical colleges of Bangladesh. (Bangladesh Journal of Medical Education, 13(1), January 2022: 33-39)

An Application of Cadastral Fabric System in Improving Positional Accuracy of Cadastral Databases in Malaysia

Abstract. A cadastral fabric is perceived as a feasible solution to improve the speed, efficiency, and quality of cadastral measurement data, to implement Positional Accuracy Improvement (PAI), and to support the Coordinated Cadastral System (CCS) and Dynamic Coordinated Cadastral System (DCCS) in Malaysia. In light of this, this study proposes a system to upgrade the positional accuracy of the existing cadastral system through the utilisation of a cadastral fabric system. A comprehensive investigation of the capability of the proposed system is carried out. Four evaluation aspects are incorporated in the study to investigate the feasibility and capability of the software: performance of geodetic least squares adjustment, quality assurance techniques, supporting functions, and user friendliness. The study utilises secondary data obtained from the Department of Surveying and Mapping Malaysia (DSMM); the test area, Block B21701, is located in Selangor, Malaysia. Results show that least squares adjustment for the entire network is completed in a timely manner. Various quality assurance techniques are implementable, namely error ellipses, magnitudes of correction vectors, adjustment trajectory, and inspection of adjusted online bearings. In addition, the system supports coordinate versioning and coordinates in various datums and projections. Finally, the user friendliness of the system is demonstrated through the software interface, interaction, and automation functions. It is concluded that the proposed system is highly feasible and capable of creating a cadastral fabric to improve the positional accuracy of the existing cadastral system used in Malaysia.
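For readers unfamiliar with the underlying computation, a cadastral fabric adjustment is, at its core, a weighted least squares fit of coordinates to survey measurements. The sketch below shows the generic normal-equation step on a made-up one-dimensional leveling-style network; it illustrates the principle only and is not the DSMM workflow or any particular package's implementation.

```python
import numpy as np

# Generic weighted least squares adjustment: x = (A^T W A)^(-1) A^T W l.
# Toy 1-D network (made-up data): two unknown points P1, P2 tied to a
# fixed datum point by three measured height differences.

A = np.array([
    [1.0, 0.0],   # obs 1: datum -> P1
    [-1.0, 1.0],  # obs 2: P1 -> P2
    [0.0, 1.0],   # obs 3: datum -> P2
])
l = np.array([10.02, 5.11, 15.09])                    # observations
W = np.diag([1 / 0.02**2, 1 / 0.03**2, 1 / 0.02**2])  # weights = 1/sigma^2

N = A.T @ W @ A                      # normal matrix
x = np.linalg.solve(N, A.T @ W @ l)  # adjusted coordinates
v = A @ x - l                        # residuals (correction vectors)
Qxx = np.linalg.inv(N)               # cofactor matrix; in 2-D/3-D networks,
                                     # error ellipses are derived from this

print("adjusted heights:", x)
print("residuals:", v)
```

The quality assurance techniques the abstract mentions, error ellipses and magnitudes of correction vectors, are read directly off `Qxx` and `v` in this formulation.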


Total Quality Management Research Paper Topics


Total quality management research paper topics have grown to become an essential area of study, reflecting the critical role that quality assurance and continuous improvement play in modern organizations. This subject encompasses a wide array of topics, methodologies, and applications, all aimed at enhancing operational efficiency, customer satisfaction, and competitive advantage. The purpose of this text is to provide students, researchers, and practitioners with a comprehensive guide on various aspects of total quality management (TQM). It includes an extensive list of potential research paper topics categorized into ten main sections, a detailed article explaining the principles and practices of TQM, guidelines on how to choose and write on TQM topics, and an introduction to iResearchNet’s custom writing services that cater to this field. This comprehensive resource aims to assist students in navigating the complex landscape of TQM, inspiring insightful research, and offering practical tools and support for academic success.

100 Total Quality Management Research Paper Topics

Total Quality Management (TQM) has evolved to become a strategic approach to continuous improvement and operational excellence. It has applications across various industries, each with its unique challenges and opportunities. Below is an exhaustive list of TQM research paper topics, divided into ten categories, offering a rich source of ideas for students and researchers looking to explore this multifaceted domain.


Total Quality Management transcends traditional boundaries and integrates concepts from various disciplines. Its goal is to create a culture where quality is at the forefront of every decision and process. The following list presents 100 TQM research topics divided into ten different categories. Each category represents a specific aspect of TQM, providing an extensive foundation for exploring this complex field.

  • Historical Development of TQM
  • Core Principles of TQM
  • TQM and Organizational Culture
  • Deming’s 14 Points: A Critical Analysis
  • Six Sigma and TQM: A Comparative Study
  • TQM in Manufacturing: Case Studies
  • TQM and Leadership: Role and Responsibilities
  • Customer Focus in TQM
  • Employee Involvement in TQM Practices
  • Challenges in Implementing TQM
  • TQM in Healthcare
  • TQM in Education
  • TQM in the Automotive Industry
  • TQM in the Food and Beverage Industry
  • TQM in Information Technology
  • TQM in Hospitality
  • TQM in the Banking Sector
  • TQM in Construction
  • TQM in Supply Chain Management
  • TQM in Government Services
  • Statistical Process Control in TQM
  • The 5S Method in Quality Management
  • Kaizen and Continuous Improvement
  • Root Cause Analysis in TQM
  • Quality Function Deployment (QFD)
  • The Fishbone Diagram in TQM
  • Process Mapping and Quality Improvement
  • Benchmarking for Quality Enhancement
  • The Role of FMEA in Quality Management
  • Design of Experiments (DOE) in TQM
  • ISO 9001 and Quality Management
  • The Benefits of ISO 14001
  • Understanding Six Sigma Certifications
  • The Impact of OHSAS 18001 on Safety Management
  • Lean Manufacturing and Quality Standards
  • Implementation of ISO 22000 in Food Safety
  • The Role of ISO/IEC 17025 in Testing Laboratories
  • Quality Management in ISO 27001 (Information Security)
  • Achieving CE Marking for Product Safety
  • The Influence of SA 8000 on Social Accountability
  • Measuring Customer Satisfaction in TQM
  • The Role of Service Quality in Customer Retention
  • Customer Complaints and Quality Improvement
  • Building Customer Loyalty Through TQM
  • Customer Feedback and Continuous Improvement
  • Customer Relationship Management (CRM) and TQM
  • Emotional Intelligence and Customer Satisfaction
  • The Impact of Branding on Customer Loyalty
  • Customer Experience Management in TQM
  • Customer Segmentation and Targeting in TQM
  • The Role of Training in TQM
  • Employee Empowerment in Quality Management
  • Motivational Theories and TQM
  • Building a Quality Culture Through Employee Engagement
  • Employee Recognition and Reward Systems in TQM
  • Leadership Styles and Employee Performance in TQM
  • Communication and Teamwork in TQM
  • Managing Change in TQM Implementation
  • Conflict Resolution Strategies in TQM
  • Work-Life Balance in a Quality-Oriented Organization
  • Key Performance Indicators (KPIs) in TQM
  • Balanced Scorecard and Quality Management
  • Performance Appraisals in a TQM Environment
  • Continuous Monitoring and Evaluation in TQM
  • Risk Management in Quality Performance
  • Process Auditing and Quality Control
  • The Role of Quality Circles in Performance Evaluation
  • Value Stream Mapping and Process Optimization
  • The Impact of E-business on Quality Performance
  • Outsourcing and Quality Assurance
  • Environmental Sustainability and TQM
  • Social Responsibility and Ethical Practices in TQM
  • Green Manufacturing and Environmental Performance
  • Corporate Social Responsibility (CSR) Strategies in TQM
  • Waste Reduction and Recycling in TQM
  • Community Engagement and Social Impact
  • Sustainable Development Goals (SDGs) and TQM
  • Energy Efficiency and Sustainable Quality Management
  • Ethical Sourcing and Supply Chain Responsibility
  • Human Rights and Labor Practices in TQM
  • TQM Practices in Different Cultures
  • The Influence of Globalization on TQM
  • Cross-Cultural Communication and Quality Management
  • International Regulations and Quality Standards
  • TQM in Emerging Economies
  • Quality Management in Multinational Corporations
  • The Role of WTO in Global Quality Standards
  • Outsourcing and Global Supply Chain Quality
  • Global Competition and Quality Strategies
  • International Collaboration and Quality Innovation
  • Technological Innovations and Quality Management
  • Big Data and Analytics in TQM
  • Quality 4.0 and the Role of IoT
  • Artificial Intelligence and Quality Prediction
  • The Impact of Social Media on Quality Perception
  • Sustainability and Future Quality Management
  • Agile Methodologies and Quality Flexibility
  • Blockchain Technology and Quality Traceability
  • Cybersecurity and Quality Assurance
  • The Future Role of Human Resource in Quality Management

The vast array of topics listed above provides a comprehensive insight into the dynamic and multifaceted world of Total Quality Management. From foundational principles to future trends, these topics offer students a diverse range of perspectives to explore, understand, and contribute to the ongoing dialogue in TQM. With proper guidance, dedication, and an open mind, scholars can delve into these subjects to create impactful research papers, case studies, or projects that enrich the existing body of knowledge and drive further innovation in the field. Whether one chooses to focus on a specific industry, a particular tool, or an emerging trend, the possibilities are endless, and the journey towards quality excellence is both challenging and rewarding.

Total Quality Management and the Range of Research Paper Topics

Total Quality Management (TQM) represents a comprehensive and structured approach to organizational management that seeks to improve the quality of products and services through ongoing refinements in response to continuous feedback. This article aims to provide an in-depth exploration of TQM, shedding light on its evolution, its underlying principles, and the vast range of research topics it offers.

Historical Background

Total Quality Management has its roots in the early 20th century, with the development of quality control and inspection processes. However, it wasn’t until the mid-1980s that TQM became a formalized, systematic approach, greatly influenced by management gurus like W. Edwards Deming, Joseph Juran, and Philip Crosby.

  • Early Quality Control Era : During the industrial revolution, emphasis on quality control began, primarily focusing on product inspection.
  • Post-World War II Era : The concept of quality management grew as the U.S. sought to rebuild Japan’s industry. Deming’s teachings on quality greatly influenced Japanese manufacturing.
  • TQM’s Formalization : The integration of quality principles into management practices led to the formalization of TQM, encompassing a holistic approach towards quality improvement.

Principles of Total Quality Management

TQM is underpinned by a set of core principles that guide its implementation and contribute to its success. Understanding these principles is fundamental to any research into TQM.

  • Customer Focus : At the heart of TQM is a strong focus on customer satisfaction, aiming to exceed customer expectations.
  • Continuous Improvement : TQM promotes a culture of never-ending improvement, addressing small changes that cumulatively lead to substantial improvement over time.
  • Employee Engagement : Engaging employees at all levels ensures that everyone feels responsible for achieving quality.
  • Process Approach : Focusing on processes allows organizations to optimize performance by understanding how different processes interrelate.
  • Data-Driven Decision Making : Utilizing data allows for objective assessment and decision-making.
  • Systematic Approach to Management : TQM requires a strategic approach that integrates organizational functions and processes to achieve quality objectives.
  • Social Responsibility : Considering societal well-being and environmental sustainability is key in TQM.

Scope and Application

Total Quality Management is applicable across various domains and industries. The following areas showcase the versatility of TQM:

  • Manufacturing : Implementing TQM principles in manufacturing ensures efficiency and consistency in production processes.
  • Healthcare : TQM in healthcare focuses on patient satisfaction, error reduction, and continuous improvement.
  • Education : In educational institutions, TQM can be used to improve the quality of education through better administrative processes and teaching methods.
  • Service Industry : Whether in hospitality, banking, or IT, TQM’s principles can enhance service quality and customer satisfaction.
  • Public Sector : Governmental bodies and agencies can also employ TQM to enhance public service delivery and satisfaction.

TQM’s multifaceted nature offers a wide range of research paper topics. Some areas of interest include:

  • TQM Tools and Techniques : Research on tools like Six Sigma, Kaizen, and statistical process control (see the control-chart sketch after this list).
  • Quality Standards : Investigating the impact and implementation of ISO standards.
  • Industry-Specific Applications : Exploring how TQM is applied and adapted in different industries.
  • Challenges and Opportunities : Assessing the difficulties and advantages of implementing TQM in contemporary business environments.
  • Emerging Trends : Examining future trends in TQM, such as the integration of technology and sustainability considerations.
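As a concrete taste of one such tool, the sketch below computes the limits of a Shewhart X-bar chart, one of the basic statistical process control techniques named in the first item of the list above. The subgroup data are made up, and the example uses the standard A2 = 0.577 constant for subgroups of size 5; it is a minimal illustration, not tied to any particular study or software.

```python
import statistics

# Minimal statistical process control sketch: X-bar chart limits from
# subgroup means and ranges (made-up data, subgroups of size 5).
A2 = 0.577  # standard control-chart constant for subgroup size n = 5

subgroups = [
    [10.1, 9.9, 10.0, 10.2, 9.8],
    [10.0, 10.1, 9.9, 10.0, 10.1],
    [9.7, 10.3, 10.0, 9.9, 10.1],
    [10.2, 10.0, 9.8, 10.1, 10.0],
]

xbar = [statistics.mean(s) for s in subgroups]              # subgroup means
rbar = statistics.mean(max(s) - min(s) for s in subgroups)  # mean range
center = statistics.mean(xbar)                              # grand mean

ucl = center + A2 * rbar  # upper control limit
lcl = center - A2 * rbar  # lower control limit

print(f"center = {center:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
flagged = [round(m, 3) for m in xbar if not lcl <= m <= ucl]
print("out-of-control subgroup means:", flagged or "none")
```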

Total Quality Management has evolved from a simple focus on product inspection to a strategic approach to continuous improvement that permeates the entire organization. Its application is not confined to manufacturing but has spread across various sectors and industries.

Research in TQM is equally diverse, offering students and scholars a rich and complex field to explore. Whether delving into the historical evolution of TQM, examining its principles, evaluating its application in different sectors, or exploring its myriad tools and techniques, the study of TQM is vibrant and multifaceted.

By undertaking research in Total Quality Management, one not only contributes to the academic body of knowledge but also plays a role in shaping organizational practices that emphasize quality, efficiency, customer satisfaction, and social responsibility. In a global business environment characterized by competitiveness, complexity, and constant change, the principles and practices of TQM remain more relevant than ever.

How to Choose Total Quality Management Research Paper Topics

Choosing the right topic for a research paper in Total Quality Management (TQM) is a crucial step in ensuring that your paper is both engaging and academically relevant. The selection process should align with your interests, the academic requirements, the targeted audience, and the available resources for research. Here is an in-depth guide, including an introductory paragraph, ten essential tips, and a concluding paragraph to help you make an informed choice.

Total Quality Management encompasses a broad spectrum of theories, tools, techniques, and applications across various industries. This richness and diversity offer a plethora of potential research topics. However, selecting the perfect one can be daunting. The following tips are designed to guide students in choosing a research topic that resonates with their interests and the current trends in TQM.

  • Identify Your Area of Interest : TQM has many facets, such as principles, tools, applications, challenges, and trends. Pinpointing the area that piques your interest will help in narrowing down your topic.
  • Consider Academic Relevance : Your chosen topic should align with your course objectives and academic guidelines. Consult your professor or academic advisor to ensure that the topic fits the scope of your course.
  • Research Current Trends : Stay up-to-date with the latest developments in TQM by reading scholarly articles, attending conferences, or following industry leaders. Current trends may inspire a relevant and timely topic.
  • Evaluate Available Resources : Make sure that your chosen topic has enough existing literature, data, and resources to support your research.
  • Assess the Scope : A too broad topic might be overwhelming, while a too narrow one might lack content. Balance the scope to ensure depth without over-extending.
  • Consider Practical Implications : If possible, choose a topic that has real-world applications. Connecting theory to practice makes your research more impactful.
  • Check Originality : Aim for a topic that offers a new perspective or builds on existing research in a unique way. Your contribution to the field should be clear and valuable.
  • Evaluate Your Expertise : Choose a topic that matches your level of expertise. Overly complex subjects might lead to difficulties, while overly simple ones might not challenge you enough.
  • Consider the Target Audience : Think about who will be reading your research paper. Tailoring your topic to the interests and expectations of your readers can make your paper more engaging.
  • Conduct a Preliminary Research : Before finalizing your topic, conduct some preliminary research to ensure there’s enough material to work with and that the topic is feasible within the given timeframe.

Selecting the right topic for a Total Quality Management research paper is a thoughtful and multifaceted process. It requires considering personal interests, academic requirements, current industry trends, available resources, and practical implications.

By following the guidelines provided, students can align their research with both personal and academic objectives, paving the way for a successful research experience. The ideal topic is one that not only aligns with the ever-evolving field of TQM but also resonates with the researcher’s passion and curiosity, laying the foundation for a meaningful and insightful investigation into the dynamic world of Total Quality Management.

How to Write a Total Quality Management Research Paper

Writing a Total Quality Management (TQM) research paper is a valuable endeavor that requires a clear understanding of the subject, strong analytical skills, and a methodical approach to research and writing. This guide outlines how to write an impressive research paper on TQM, including an introductory paragraph, ten actionable tips, and a concluding paragraph.

Total Quality Management is a comprehensive approach that emphasizes continuous improvement, customer satisfaction, employee involvement, and integrated management systems. Writing a research paper on TQM is not just an academic exercise; it is an exploration into the principles and practices that drive quality in organizations. The following detailed guidance aims to equip you with the necessary knowledge and skills to compose a compelling TQM research paper.

  • Understand the Basics of TQM : Start by immersing yourself in the foundational principles of TQM, including its history, methodologies, and various applications across industries. A deep understanding will form the basis of your research.
  • Choose a Specific Topic : As outlined in the previous section, select a specific and relevant topic that aligns with your interest and the current trends in the field of TQM.
  • Conduct Comprehensive Research : Use reputable sources such as academic journals, books, industry reports, and expert opinions to gather information. Always critically evaluate the reliability and relevance of your sources.
  • Create a Thesis Statement : Your thesis statement is the guiding force of your paper. It should be clear, concise, and articulate your main argument or focus.
  • Develop an Outline : Organize your research into a logical structure. An outline will guide you in presenting your ideas coherently and ensuring that you cover all essential points.
  • Write the Introduction : Introduce the topic, provide background information, and present the thesis statement. Make sure to engage the reader and provide a roadmap for the paper.
  • Compose the Body : Divide the body into sections and subsections that explore different aspects of your topic. Use evidence, examples, and logical reasoning to support your arguments.
  • Incorporate Case Studies and Examples : If applicable, include real-world examples or case studies that demonstrate the application of TQM principles in a practical context.
  • Write the Conclusion : Summarize the key findings, restate the thesis, and provide insights into the implications of your research. A strong conclusion leaves a lasting impression.
  • Revise and Edit : Pay attention to both content and form. Check for logical flow, coherence, grammar, and formatting. Consider seeking feedback from peers or professionals.

Writing a research paper on Total Quality Management is a complex but rewarding task. By understanding the fundamentals of TQM, selecting a precise topic, conducting thorough research, and following a structured writing process, students can produce a paper that not only meets academic standards but also contributes to the understanding of quality management in the modern world.

Emphasizing critical thinking, analytical prowess, and attention to detail, the journey of writing a TQM research paper enriches the student’s academic experience and provides valuable insights into the field that continues to shape organizations globally.

The strategies and tips provided in this guide serve as a roadmap for aspiring researchers, helping them navigate the challenges and triumphs of academic writing in the realm of Total Quality Management. With dedication, creativity, and adherence to scholarly standards, the result can be a meaningful and enlightening piece that resonates with both academics and practitioners alike.

iResearchNet Writing Services

For Custom Total Quality Management Research Papers

Total Quality Management (TQM) research papers require a specialized approach, encompassing a wide array of methodologies, tools, and applications. iResearchNet, as a leading academic writing service provider, is committed to assisting students in crafting top-notch custom Total Quality Management research papers. Here’s a detailed look at the 13 standout features that make iResearchNet the ideal choice for your TQM research paper needs:

  • Expert Degree-Holding Writers : Our team of highly qualified writers possesses advanced degrees in management, business, and related disciplines, ensuring authoritative and insightful content tailored to Total Quality Management.
  • Custom Written Works : Every research paper we undertake is customized to your specific requirements, providing unique, plagiarism-free content that aligns with your academic objectives.
  • In-Depth Research : Equipped with access to vast academic and industry resources, our writers conduct comprehensive research, delivering TQM papers replete with the latest findings, theories, and applications.
  • Custom Formatting (APA, MLA, Chicago/Turabian, Harvard) : We adhere to your institution’s specific formatting guidelines, including the prevalent APA, MLA, Chicago/Turabian, and Harvard styles.
  • Top Quality : iResearchNet’s commitment to excellence ensures that each TQM research paper passes through stringent quality control, offering you not only well-crafted content but insightful and compelling perspectives.
  • Customized Solutions : We understand that every student’s needs are unique, and our services are designed to be flexible enough to cater to individual requirements, whether partial or end-to-end support.
  • Flexible Pricing : Our pricing structure is both competitive and transparent, reflecting the complexity, length, and urgency of your project without compromising on quality.
  • Short Deadlines up to 3 Hours : Even the most urgent projects with deadlines as short as 3 hours are manageable by our adept team.
  • Timely Delivery : Understanding the importance of punctuality, we ensure that every project is delivered within the agreed timeframe.
  • 24/7 Support : Our around-the-clock support team is always available to assist you, answer queries, and provide project updates.
  • Absolute Privacy : We prioritize your privacy, handling all personal and payment details with utmost confidentiality, ensuring that your information is never shared or resold.
  • Easy Order Tracking : Our user-friendly platform enables you to effortlessly track your order’s progress, maintaining control and direct communication with the writer.
  • Money Back Guarantee : Standing firmly behind the quality of our work, we offer a money-back guarantee, promising to make things right or refund your money if the delivered TQM research paper doesn’t meet the agreed standards.

iResearchNet takes pride in delivering excellence in custom Total Quality Management research paper writing. By combining the expertise of seasoned writers, comprehensive research capabilities, and a student-focused approach, we aim to facilitate academic success. Our carefully curated features provide a reliable, quality-driven solution to TQM research paper writing. Let iResearchNet guide you in creating exceptional, engaging, and authoritative papers in the realm of Total Quality Management.

Unleash Your Academic Potential with iResearchNet

At iResearchNet, we understand the complexity and nuance of crafting an impeccable Total Quality Management (TQM) research paper. As you explore the fascinating world of quality management principles, methodologies, and applications, our seasoned professionals are here to ensure that your academic pursuits reach new heights. Here’s why iResearchNet is your go-to partner for top-tier TQM research papers:

  • Tailored to Your Needs : From topic selection to final submission, our custom writing services are fine-tuned to meet your unique requirements. With a dedicated focus on Total Quality Management, our experts provide insightful, relevant, and comprehensive research that not only fulfills academic criteria but also fuels intellectual curiosity.
  • Quality You Can Trust : Quality isn’t just a subject we write about; it’s what defines us. Our commitment to academic excellence is evident in every paper we craft. Supported by thorough research, critical thinking, and precise alignment with your specifications, iResearchNet ensures a product that stands out in your academic journey.
  • Support at Every Step : We know that writing a TQM research paper is a process filled with questions and uncertainties. That’s why our team is available around the clock to support you. From understanding your assignment to addressing revisions, our 24/7 customer service provides peace of mind.
  • Invest in Your Success : With flexible pricing options, a robust money-back guarantee, and a seamless ordering process, iResearchNet makes it simple and risk-free to secure professional assistance for your Total Quality Management research paper. Embrace the opportunity to showcase your understanding of TQM principles through a well-articulated, compelling research paper.

Don’t let the challenges of writing a Total Quality Management research paper hold you back. Tap into the expertise and resources of iResearchNet, where we transform your academic goals into reality. Your perfect Total Quality Management research paper is just a click away!


  • Open access
  • Published: 29 May 2021

Big data quality framework: a holistic approach to continuous quality management

Ikbal Taleb, Mohamed Adel Serhani (ORCID: orcid.org/0000-0001-7001-3710), Chafik Bouhaddioui & Rachida Dssouli

Journal of Big Data, Volume 8, Article number 76 (2021)


Big Data is an essential research area for governments, institutions, and private agencies to support their analytics decisions. Big Data refers to how data is collected, processed, and analyzed to generate value-added, data-driven insights and decisions. Degradation in data quality may result in unpredictable consequences; in that case, confidence in the data and its source is lost. In the Big Data context, data characteristics such as volume, multiple heterogeneous data sources, and fast data generation increase the risk of quality degradation and require efficient mechanisms to check data worthiness. However, ensuring Big Data Quality (BDQ) is a costly and time-consuming process, since excessive computing resources are required. Maintaining quality through the Big Data lifecycle requires quality profiling and verification before any processing decision. This paper proposes a BDQ Management Framework for enhancing pre-processing activities while strengthening data control. The framework uses a new concept, the Big Data Quality Profile, which captures the quality outline, requirements, attributes, dimensions, scores, and rules. Using the framework's Big Data profiling and sampling components, a fast and efficient data quality estimation is initiated before and after an intermediate pre-processing phase. The exploratory profiling component plays an initial role in quality profiling: it uses a set of predefined quality metrics to evaluate important data quality dimensions, and it generates quality rules by applying various pre-processing activities and their related functions. These rules feed the Data Quality Profile and result in quality scores for the selected quality attributes. The framework implementation and dataflow management across the various quality management processes are discussed, and ongoing work on framework evaluation and deployment to support quality evaluation decisions concludes the paper.

Introduction

Big Data is universal [ 1 ]: it consists of large volumes of data with unconventional types, which may be structured, unstructured, or in continuous motion. Whether it is used by industry and governments or by research institutions, new ways of handling Big Data, from technology to management approaches, are required to support data-driven decisions. The expectations from Big Data analytics range from trend finding to pattern discovery in application domains such as healthcare, business, and scientific exploration, with the aim of extracting significant insights and decisions. Extracting this precious information from large datasets is not an easy task; dedicated planning and an appropriate selection of tools and techniques are needed to optimize the exploration of Big Data.

Owning a huge amount of data does not automatically lead to valuable insights and decisions: Big Data does not necessarily mean big insights. In fact, it can complicate the processes involved in fulfilling such expectations, and substantial resources may be required, in addition to adapting existing analytics algorithms to cope with Big Data requirements. Generally, data is not ready to be processed as it is. It should go through many stages, including cleansing and pre-processing, before undergoing any refining, evaluation, and preparation treatment for the next stages of its lifecycle.

Data Quality (DQ) is a very important aspect of Big Data for assessing the aforementioned pre-processing data transformations. This is because Big Data is mostly obtained from the web, social networks, and the IoT, where it may be found in structured or unstructured form, with no schema and possibly no quality properties. Exploring data profiling, and more specifically DQ profiling, is essential before data preparation and pre-processing for both structured and unstructured data. A DQ assessment should also be conducted for all data-related content, including attributes and features. An analysis of the assessment results then provides the elements needed to enhance, control, monitor, and enforce DQ along the Big Data lifecycle, for example, maintaining high data quality (conformance to requirements) in the processing phase.

Data quality has been an active and attractive research area for several years [ 2 , 3 ]. In the context of Big Data, quality assessment processes are hard to implement, since they are time- and cost-consuming, especially for pre-processing activities. These issues have intensified because the available quality assessment techniques were developed for well-structured data and are not fully appropriate for Big Data. Consequently, new data quality processes must be carefully developed to assess the data origin, domain, format, and type. An appropriate DQ management scheme is critical when dealing with Big Data. Furthermore, Big Data architectures do not incorporate quality assessment practices throughout the Big Data lifecycle apart from pre-processing, and the new initiatives that exist are still limited to specific applications [ 4 , 5 , 6 ]. The evaluation and estimation of Big Data quality should instead be handled in all phases of the Big Data lifecycle, from data inception to analytics, to support data-driven decisions.

The work presented in this paper concerns Big Data quality management through the Big Data lifecycle. The objective of such a management perspective is to provide users or data scientists with a framework capable of managing DQ from data inception to analytics and visualization, thereby supporting decisions. The definition of acceptable Big Data quality depends largely on the type of application and on the Big Data requirements. The need to evaluate Big Data quality before engaging in any Big Data project is evident, because the high cost of processing useless data at an early stage of its lifecycle can be avoided. Further challenges for the quality evaluation process arise when dealing with unstructured, schema-less data collected from multiple sources. A Big Data Quality Management Framework can provide quality management mechanisms to handle and ensure data quality throughout the Big Data lifecycle by:

Improving the processes of the Big Data lifecycle to be quality-driven, so that quality assessment is integrated (built-in) at every stage of the Big Data architecture.

Providing quality assessment and enhancement mechanisms to support cross-process data quality enforcement.

Introducing the concept of Big Data Quality Profile (DQP) to manage and trace the whole data pre-processing procedures from data source selection to final pre-processed data and beyond (processing and analytics).

Supporting profiling of data quality and quality rules discovery based on quantitative quality assessments.

Supporting deep quality assessment using qualitative quality evaluations on data samples obtained using data reduction techniques.

Supporting data-driven decision making based on the latest data assessments and analytics results.

The remainder of this paper is organized as follows. In Sect. " Overview and background ", we provide detailed background on Big Data and data quality and introduce the problem statement and research objectives. The research literature related to Big Data quality assessment approaches is presented in Sect. " Related research studies ". The components of the proposed framework and their main functionalities are described in Sect. " Big data quality management framework ". Implementation and dataflow management are detailed in Sect. " Implementations: Dataflow and quality processes development ", and Sect. " Conclusion " concludes the paper and points to our ongoing research developments.

Overview and background

An exponential increase in global inter-network activity and data storage has triggered the Big Data era. Application domains such as Facebook, Amazon, Twitter, YouTube, Internet of Things sensors, and mobile smartphones are the main players and data generators. The amount of data generated daily is around 2.5 quintillion bytes (2.5 Exabytes, 1 EB = 10^18 bytes).

According to IBM, Big Data is a high-volume, high-velocity, and high-variety information asset that demands cost-effective, innovative forms of information processing for enhanced insights and decision-making. It is used to describe a massive volume of both structured and unstructured data; therefore, Big Data processing using traditional database and software tools is a difficult task. Big Data also refers to the technologies and storage facilities required by an organization to handle and manage large amounts of data.

Originally, in [ 7 ], the McKinsey Global Institute identified three Big Data characteristics, commonly known as the "3Vs": Volume, Variety, and Velocity [ 1 , 7 , 8 , 9 , 10 , 11 ]. These characteristics have since been extended to as many as 10 Vs (e.g., Volume, Velocity, Variety, Veracity, Value, Vitality, Viscosity, Visualization, Vulnerability) [ 12 , 13 , 14 ].

In [ 10 , 15 , 16 ], the authors define important Big Data systems architectures. The data in Big Data comes from (1) heterogeneous data sources (e-Gov: Census data, Social networking: Facebook, and Web: Google page rank data), (2) data in different formats (video, text), and (3) data of various forms (unstructured: raw text data with no schema, and semi-structured: metadata, graph structure as text). Moreover, data travels through different stages, composing the Big Data lifecycle. Many aspects of Big Data architectures were compiled from the literature. Our enhanced design contributions are illustrated in Fig.  1 and described as follows:

Data generation: this is the phase of data creation. Many data sources can generate this data such as electrophysiology signals, sensors used to gather climate information, surveillance devices, posts to social media sites, videos and still images, transaction records, stock market indices, GPS location, etc.

Data acquisition: it consists of data collection, data transmission, and data pre-processing [ 1 , 10 ]. Due to the exponential growth and availability of heterogeneous data production sources, an unprecedented amount of structured, semi-structured, and unstructured data is available. Therefore, the Big Data Pre-Processing consists of typical data pre-processing activities: integration, enhancements and enrichment, transformation, reduction, discretization, and cleansing .

Data storage: it consists of the data center infrastructure, where the data is stored and distributed among several clusters and data centers spread geographically around the world. The software storage layer is supported by the Hadoop ecosystem to ensure a degree of fault tolerance, storage reliability, and efficiency through replication. The data storage stage is responsible for all input and output data that circulates within the lifecycle.

Data analysis: (Processing, Analytics, and Visualization); it involves the application of data mining and machine learning algorithms to process the data and extract useful insights for better decision making. Data scientists are the most valuable users of this phase since they have the expertise to apply what is needed, on what must be analyzed.

Figure 1: Big data lifecycle value chain

Data quality, quality dimensions, and metrics

The majority of studies in the area of DQ originate from the database context [ 2 , 3 ] and the management research community. According to [ 17 ], DQ is not an easy concept to define; its definition depends on awareness of the data domain. There is a consensus that data quality always depends on the quality of the data source [ 18 ], which highlights that enormous quality issues are hidden inside data and its values.

In the following, the definitions of data quality, data quality dimensions, and quality metrics and their measurements are given:

Data quality: it has many meanings related to the context, domain, area, and field in which the data is used [ 19 , 20 ], and academia interprets DQ differently than industry. In [ 21 ], data quality is defined as "the capability of data to satisfy stated and implied needs when used under specified conditions"; DQ is also commonly defined as "fitness for use". Similarly, [ 20 ] define data quality as the property, corresponding to quality management, of being appropriate for use or meeting user needs.

Data quality dimensions: DQDs are used to measure, quantify, and manage DQ [ 20 , 22 , 23 ]. Each quality dimension has a specific metric that measures its performance. There are several DQDs, which can be organized into four categories [ 24 , 25 ]: intrinsic, contextual, accessibility, and representational [ 14 , 15 , 22 , 24 , 26 , 27 ]. Two important categories (intrinsic and contextual) are illustrated in Fig.  2 , and examples of intrinsic quality dimensions are given in Table 1 .

Metrics and measurements: once the data is generated, its quality should be measured, which means a data-driven strategy is needed to act on the data. Hence, it is mandatory to measure and quantify each DQD. Structured or semi-structured data is available as a set of attributes represented in columns or rows, with their values recorded accordingly. In [ 28 ], a quality metric is defined as a quantitative or categorical representation of one or more attributes. Any data quality metric should define whether the values of an attribute respect a targeted quality dimension. The author of [ 29 ] noted that data quality measurement metrics tend to produce either binary results (correct or incorrect) or a value between 0 and 100 (with 100 representing the highest quality). This applies to quality dimensions such as accuracy, completeness, consistency, and currency. Examples of DQD metrics are illustrated in Table 2 .
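
To make the metric idea concrete, here is a minimal sketch (ours, not the paper's implementation) of a completeness metric and a range-based accuracy metric for a single attribute; the toy values and the valid range [0, 100] are hypothetical:

```python
# Minimal sketch (ours): two per-attribute DQD metrics on a 0-100 scale.
import pandas as pd

def completeness(series: pd.Series) -> float:
    """Share of non-missing values, in percent."""
    return 100.0 * series.notna().mean()

def range_accuracy(series: pd.Series, low: float, high: float) -> float:
    """Share of the non-missing values that fall inside [low, high]."""
    valid = series.dropna()
    if valid.empty:
        return 0.0
    return 100.0 * valid.between(low, high).mean()

age = pd.Series([25, 41, None, 130, 37])   # one missing value, one out-of-range value
print(completeness(age))                   # 80.0
print(range_accuracy(age, 0, 100))         # 75.0
```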

Figure 2: Data quality dimensions

DQD’s must be relevant to data quality problems that have been identified. Thus, a metric tends to measure if attributes comply with defined DQD’s. These measurements are performed for each attribute, given their type and data ranges of values collected from the data profiling process. The measurements produce DQD’s scores for the designed metrics of all attributes [ 30 ]. Specific metrics need to be defined, to estimate specific quality dimensions of other data types such as images, videos, and audio [ 5 ].

Big data characteristics and data quality

The main Big Data characteristics, commonly named the V's, are initially Volume, Velocity, Variety, and Veracity. Since the inception of Big Data, 10 V's have been defined, and probably new V's will be adopted [ 12 ]. For example, veracity expresses the trustworthiness of data, mostly known as data quality, and accuracy is often related to precision, reliability, and veracity [ 31 ]. Our tentative mapping among these characteristics, data, and data quality is shown in Table 3 . It builds on the studies in [ 5 , 32 , 33 ], where the authors attempted to link the V's to the data quality dimensions. In another study [ 34 ], the authors addressed the mapping of the DQD accuracy to the Big Data characteristic Volume and showed that data size has an impact on DQ.

Big data lifecycle: where does quality matter?

According to [ 21 , 35 ], data quality issues may appear in each phase of the Big Data value chain. Addressing data quality may follow different strategies, since each phase has its own features: improving the quality of existing data, and/or refining, reassessing, and redesigning the processes that generate and collect data, so as to improve their quality.

Big Data quality issues have been addressed by many studies in the literature [ 36 , 37 , 38 ]. These studies generally elaborate on the issues and propose generic frameworks without comprehensive approaches and techniques to manage quality across the Big Data lifecycle. Generic frameworks of this kind are presented in [ 5 , 39 , 40 ].

Figure 3 illustrates where data quality can and must be addressed in the Big Data value chain, across phases (1) to (7).

In the data generation phase, there is a need to define how and what data is generated.

In the data transmission phase, the data distribution scheme relies on the underlying networks. Unreliable networks may affect data transfer, whose quality is expressed in terms of data loss and transmission errors.

Data collection refers to where, when, and how the data is collected and handled; well-defined, structured constraint verification must be established for the data.

The pre-processing phase is one of the main focus points of the proposed work and follows a data-driven strategy. An evaluation process provides the necessary means to ensure the quality of data for the next phases; evaluating DQ before (pre) and after (post) pre-processing on data samples strengthens the DQP.

In the Big Data storage phase, some aspects of data quality, such as storage failure, are handled by replicating data across multiple storage locations; the same applies to data transmission when a network fails to transmit data.

In the Data Processing and Analytics phases, the quality is influenced by both the applied process and data quality itself. Among the various data mining and machine learning algorithms and techniques suitable for Big Data, those that converge rapidly and consume fewer cloud resources will be highly adopted. The relation between DQ and the processing methods is substantial. A certain DQ requirement on these methods or algorithms might be imposed to ensure efficient performance.

Finally, for an ongoing iterative value chain, the visualization phase is essentially a representation of the data in an accessible way, such as a dashboard, which helps decision-makers obtain a clear picture of the data and its valuable insights. In this work, Big Data is ultimately transformed into useful small data, which is easy to visualize and interpret.

Figure 3: Where quality matters in the big data lifecycle

Data quality issues

Data quality issues generally appear when quality requirements are not met by the data values [ 41 ]. These issues arise from several factors or processes occurring at different levels:

Data source level: unreliability, trust, data copying, inconsistency, multi-sources, and data domain.

Generation level: human data entry, sensors’ readings, social media, unstructured data, and missing values.

Process level (acquisition: collection, transmission).

In [ 21 , 35 , 42 ], many causes of poor data quality were enumerated, producing a list of elements that affect quality and the DQDs. This list is illustrated in Table 4 .

Related research studies

Research directions on Big Data differ between industry and academia. Industry scientists mainly focus on the technical implementations, infrastructures, and solutions for Big Data management, whereas researchers from academia tackle theoretical issues of Big Data. Academia’s efforts mainly include the development of new algorithms for data analytics, data replication, data distribution, and optimization of data handling. In this section, the literature review is classified into 3 categories, which are described in the following sub-sections.

Data quality assessment approaches

Existing studies approach data quality from different perspectives. In the majority of papers, the authors agree that data quality is related to the phases or processes of its lifecycle [ 8 ]; in particular, data quality is highly related to the data generation phases and/or to the data's origin. The methodologies adopted to assess data quality are based on traditional data strategies and should be adapted to Big Data. Moreover, the application domain and the type of information (content-based, context-based, or rating-based) affect the way quality evaluation metrics are designed and applied. In content-based quality metrics, the information itself is used as a quality indicator, whereas context-based metrics use meta-data as quality indicators.

There are two main strategies to improve data quality according to [ 20 , 23 ]: data-driven and process-driven. The first strategy handles the data quality in the pre-processing phase by applying some pre-processing activities (PPA) such as cleansing, filtering, and normalization. These PPAs are important and occur before the data processing stage, preferably as early as possible. However, the process-driven quality strategy is applied to each stage of the Big Data value chain.
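
As a rough illustration of the data-driven strategy, the following sketch chains a few PPAs over a table; the activity names follow the text, while the concrete functions and the 'value' column are our assumptions:

```python
# Rough sketch of a data-driven PPA chain; the functions and the 'value'
# column are assumptions, not the paper's concrete activities.
import pandas as pd

def cleanse(df: pd.DataFrame) -> pd.DataFrame:
    # Remove exact duplicates and rows that are entirely empty.
    return df.drop_duplicates().dropna(how="all")

def filter_rows(df: pd.DataFrame) -> pd.DataFrame:
    # Keep rows whose 'value' lies in a plausible (hypothetical) range.
    return df[df["value"].between(0, 1000)]

def normalize(df: pd.DataFrame) -> pd.DataFrame:
    # Min-max scale 'value' to [0, 1] (assumes the column is not constant).
    out = df.copy()
    v = out["value"]
    out["value"] = (v - v.min()) / (v.max() - v.min())
    return out

def run_ppa(df: pd.DataFrame) -> pd.DataFrame:
    for activity in (cleanse, filter_rows, normalize):   # applied in order
        df = activity(df)
    return df

print(run_ppa(pd.DataFrame({"value": [10.0, 10.0, None, 5000.0, 250.0]})))
```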

Data quality assessment was discussed early in the literature [ 10 ]. It is divided into two main categories: subjective and objective. Moreover, an approach that combines these two categories to provide organizations with usable data quality metrics to evaluate their data was proposed. However, the proposed approach was not developed to deal with Big Data.

In summary, Big Data quality should be addressed early, at the pre-processing stage of the data lifecycle. The aforementioned Big Data quality challenges have not been investigated in the literature from all perspectives, and many open issues remain, especially at the pre-processing stage.

Rule-based quality methodologies

Since the data quality concept is context-driven, it may differ from one application domain to another. The definition of quality rules involves establishing a set of constraints on data generation, entry, and creation. Poor data can always exist, and rules are created or discovered to correct or eliminate such data. Rules themselves are only one part of the data quality assessment approach; a consistent process for creating, discovering, and applying quality rules should consider the following:

Characterize the quality of data being good or bad from its profile and quality requirements.

Select the data quality dimensions that apply to the data quality assessment context.

Generate quality rules based on data quality requirements, quantitative, and qualitative assessments.

Check, filter, optimize, validate, run, and test rules on data samples for efficient rules’ management.

Generate a statistical quality profile with quality rules. This profile gives an overview of the successfully validated rules together with the expected quality levels.

Hereafter, the data quality rules are discovered from the data quality evaluation. These rules are then used in Big Data pre-processing activities to improve the quality of the data. The discovery process raises many challenges and should consider several factors, including data attributes, data quality dimensions, data quality rules discovery, and their relationship with pre-processing activities.
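
One possible encoding of such a rule (an illustrative assumption, not the paper's normative format) binds a DQD, its tolerance, the targeted attributes, and a repair action:

```python
# Illustrative encoding of a discovered quality rule (assumption, not the
# paper's format): a DQD, target attributes, tolerance, and a repair action.
from dataclasses import dataclass
from typing import Callable, List
import pandas as pd

@dataclass
class QualityRule:
    dqd: str                                          # e.g. "completeness"
    attributes: List[str]                             # columns the rule targets
    tolerance: float                                  # minimum accepted score (%)
    action: Callable[[pd.DataFrame], pd.DataFrame]    # pre-processing function

# Hypothetical rule: when 'email' completeness is below 90%, drop offending rows.
drop_missing_email = QualityRule(
    dqd="completeness",
    attributes=["email"],
    tolerance=90.0,
    action=lambda df: df.dropna(subset=["email"]),
)

def apply_if_violated(df: pd.DataFrame, rule: QualityRule) -> pd.DataFrame:
    score = 100.0 * df[rule.attributes].notna().mean().mean()
    return rule.action(df) if score < rule.tolerance else df
```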

In (Lee et al., 2003), the authors concluded that the data quality problems depend on data, time, and context. Quality rules are applied to the data to solve and/or avoid quality problems. Accordingly, quality rules must be continuously assessed, updated, and optimized.

Most studies on the discovery of data quality rules come from the database community. These studies are often based on conditional functional dependencies (CFDs) to detect inconsistencies in data. CFDs are used to formulate data quality rules, which are generally expressed manually and discovered automatically using several CFD approaches [ 3 , 43 ].

Data quality assessment in Big Data has been addressed in several studies. In [ 32 ], a Data Quality-in-Use model was proposed to assess the quality of Big Data; business rules for data quality are used to decide which data must meet the pre-defined constraints or requirements. In [ 44 ], a new quality assessment approach was introduced that involves both the data provider and the data consumer; the assessment is mainly based on data consistency rules provided as metadata.

The majority of research studies on data quality and the discovery of data quality rules are based on CFDs and databases. In Big Data quality, the size, variety, and veracity of data are key characteristics that must be considered, and they should be handled before the pre-processing phase to reduce the quality assessment time and resources. Regarding quality rules, it is fundamental to use them to eliminate poor data and enforce quality on existing data, while following a data-driven quality context.

Big data pre-processing frameworks

Pre-processing data before performing any analytics is essential. However, several challenges emerge at this crucial phase of the Big Data value chain [ 10 ]. Data quality is one of these challenges and must be carefully considered in the Big Data context.

As pointed out in [ 45 ], data quality problems arise when dealing with multiple data sources, which significantly increases the requirements for data cleansing. Additionally, the large size of datasets, which arrive at an uncontrolled speed, generates an overhead on the cleansing processes. In [ 46 , 47 , 48 ], NADEEF, an extensible data cleaning system, was proposed; an extension of NADEEF for cleaning streaming Big Data was presented in [ 49 ]. The system addresses data quality through the data cleaning activity, using data quality rules and functional dependency rules [ 14 ].

Numerous other studies on Big Data management frameworks exist. In these studies, the authors surveyed and proposed Big Data management models dealing with storage, pre-processing, and processing [ 50 , 51 , 52 ]. An up-to-date review of the techniques and methods for each process involved in the management processes is also included.

The importance of quality evaluation in Big Data management has generally not been addressed. In some studies, Big Data characteristics are the only recommendations regarding quality, and no mechanisms are proposed to map or handle quality issues that might be a consequence of these Big Data V's. A Big Data management framework that includes data quality management must be developed to cope with end-to-end quality management across the Big Data lifecycle.

Finally, it is worth mentioning that research initiatives and solutions on Big Data quality are still in their preliminary phase; there is much to do on the development and standardization of Big Data quality. Big Data quality is a multidisciplinary, complex, and multi-variant domain, where new evaluation techniques, processing and analytics algorithms, storage and processing technologies, and platforms will play a key role in the development and maturity of this active research area. We anticipate that researchers from academia will contribute to the development of new Big Data quality approaches, algorithms, and optimization techniques, which will advance beyond the traditional approaches used in databases and data warehouses. Additionally, industries will lead development initiatives of new platforms, solutions, and technologies optimized to support end-to-end quality management within the Big Data lifecycle.

Big data quality management framework

The purpose of the proposed Big Data Quality Management Framework (BDQMF) is to address quality at all stages of the Big Data lifecycle. This is achieved by managing data quality before and after the pre-processing stage, providing feedback at each stage, and looping back to the previous phase whenever possible. We also believe that data quality must be handled at data inception; however, this is not considered in this work.

To overcome the limitations of existing Big Data architectures in managing data quality, a Big Data quality pre-processing approach is proposed: a Quality Framework [ 53 ]. In our framework, the quality evaluation process extracts the actual quality status of Big Data and proposes efficient actions to avoid or eliminate poor data, or to enhance its quality. The framework features the creation and management of a DQP and its repository. The proposed scheme deals with data quality evaluation before and after the pre-processing phase; these practices are essential to ensure a certain quality level for the next phases while keeping the cost of evaluation optimal.

In this work, a quantitative approach is used. It consists of an end-to-end data quality management system that handles DQ through the execution of pre-pre-processing tasks that evaluate BDQ on the data. It starts with data sampling, data and DQ profiling, and the gathering of user DQ requirements, and then proceeds to DQD evaluation and the discovery of quality rules from quality scores and requirements. Each data quality rule is realized by one or more Pre-Processing Functions (PPFs) under a specific Pre-Processing Activity (PPA); a PPA, such as cleansing, aims at increasing data quality. Pre-processing is first applied to Big Data samples, whose quality is re-evaluated to update and certify that the quality profile is complete; the profile is then applied to the whole Big Dataset, not only to data samples. Before this full pre-processing, the DQP is tuned and revisited by quality experts for endorsement, based on an equivalent data quality report that states the quality scores of the data, not the rules.

Framework description

The BDQM framework is illustrated in Fig.  4 , where all the components cooperate, relying on the Data Quality Profile. It is initially created as a Data Profile and is progressively extended from the data collection phase to the analytics phase to capture important quality-related information. For example, it contains quality requirements, targeted data quality dimensions, quality scores, and quality rules.

Figure 4: Big data quality management framework

The data lifecycle stages are part of the BDQMF. Feedback generated at all stages is analyzed and used to correct data, improve data quality, and detect any DQ-management-related failures. The key components of the proposed BDQMF include:

Big Data Quality Project (Data Sources, Data Model, User/App Quality Requirements, Data domain),

Data Quality Profile and its Repository,

Data Preparation (Sampling and Profiling),

Exploratory Quality Profiling,

Quality Parameters and Mapping,

Quantitative Quality Evaluation,

Quality Control,

Quality Rules Discovery,

Quality Rules Validation,

Quality Rules Optimization,

Big Data Pre-Processing,

Data Processing,

Data Visualization, and

Quality Monitoring.

A detailed description of each of these components is provided hereafter.

Framework key components

In the following sub-sections, each component is described. Its input(s) and output(s), its main functions, and its roles and interactions with the other framework’s components, are also described. Consequently, at each Big Data stage, the Data Quality Profile is created, updated, and adapted until it achieves the quality requirements already set by the users or applications at the beginning of the Big Data Quality Project.

Big data quality project module

The Big Data Quality Project module contains all the elements that define the data sources and the quality requirements set by either the Big Data users or the Big Data applications, representing the quality foundations of the Big Data project. As illustrated in Fig.  5 , any Big Data Quality Project should specify a set of quality requirements as targeted quality goals.

It represents the first module of the framework. The Big Data quality project represents the starting point of the BDQMF, where specifications of the data model, data sources, and targeted quality goals for DQD and data attributes are defined. These requirements are represented as data quality scores/ratios, which express the acceptance level of the evaluated data quality dimensions. For example, 80% of data accuracy, 60% data completeness, and 85% data consistency are judged by quality experts as accepted levels (or tolerance ratios). These levels can be relaxed using a range of values, depending on the context, the application domain, and the targeted processing algorithm’s requirements.

Let us denote by BDQP(DS , DS’ , Req) a Big Data Quality Project Request that initiates many automatic processes:

A data sampling and profiling process.

An exploratory quality profiling process, which is included in many quality assessment procedures.

A pre-processing phase is considered if the resulting quality scores do not meet the requirements.

The BDQP contains the input dataset DS , output dataset DS’ , and Req . The Quality requirements are presented as a tuple of sets Req  = ( D , L , A ), where:

D represents a set of data quality dimensions (DQDs), e.g., accuracy and consistency: \(D=\{d_0,\dots,d_i,\dots,d_m\}\),

L is a set of DQD acceptance (tolerance) level ratios (%), set by the user or the application related to the quality project and associated with each DQD respectively: \(L=\{l_0,\dots,l_i,\dots,l_m\}\),

A is the set of targeted data attributes. If it is not specified, the DQDs are assessed over all attributes of the dataset; since some dimensions need more detailed requirements to be assessed, the exact treatment depends on the DQD and the attribute type: \(A=\{a_0,\dots,a_i,\dots,a_m\}\)

The data quality requirements may later be refined once the profiling component provides detailed information about the data ( DQP Level 0 ). This refinement is performed within the quality mapping component, which interfaces with expert users to refine, reconfirm, and restructure their data quality parameters over the data attributes.

Data sources: there are multiple Big Data sources. Most of them generate data from new media (e.g., social media) based on the internet, while other sources are based on new technologies such as the cloud, sensors, and IoT.

Data users, data applications, and quality requirements: this module identifies and specifies the input sources of the quality-requirements parameters for the data sources. These include the quality requirements of users (e.g., domain experts, researchers, analysts, and data scientists) and of applications, which may vary from simple data processing to machine learning or AI-based applications. For users, a dashboard-like interface captures data requirements and other quality information. This interface can be enriched with information from the data sources, such as attributes and their types, if available, which can efficiently guide users through the inputs and ensure the right data is used. This phase can be initiated after sample profiling or exploratory quality profiling; otherwise, a general quality request is entered in the form of targeted data quality dimensions and their expected quality scores after the pre-processing phase. All the quality-requirements parameters and settings are recorded in the Data Quality Profile ( DQP 0 ); DQP Level 0 is created when the quality project is set.

The quality requirements are specifically set as quality score ratios, goals, or targets to be achieved by the BDQMF. They are expressed as targeted DQDs in the Big Data Quality Project.

Let us denote by Req a set of quality requirements \(Req=\{r_0,\dots,r_i,\dots,r_m\}\), constructed from the tuple ( D , L , A ). Each element is a quality requirement \(r_i=(d_i,l_i,a_i)\), representing a DQD \(d_i\) with a minimum accepted ratio level \(l_i\) for all attributes or a sub-list of selected attributes \(a_i\).

The initial DQP originating from this module is a DQP Level 0 containing the following tuple, as illustrated in Fig.  6 : BDQP(DS, DS', Req) with Req = ( D , L , A ).
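
For concreteness, the BDQP request and the Req = (D, L, A) requirements could be carried as plain data structures; the sketch below uses the paper's notation for field names, while the dataset names and example levels (taken from the 80/60/85% example above) are illustrative:

```python
# Sketch of BDQP(DS, DS', Req) with Req = (D, L, A) as plain data structures.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class QualityRequirement:                      # one r_i = (d_i, l_i, a_i)
    dimension: str                             # d_i, e.g. "accuracy"
    min_level: float                           # l_i, accepted ratio in percent
    attributes: Optional[List[str]] = None     # a_i; None means all attributes

@dataclass
class BDQP:                                    # BDQP(DS, DS', Req)
    source_dataset: str                        # DS
    output_dataset: str                        # DS'
    requirements: List[QualityRequirement]     # Req

project = BDQP(
    source_dataset="raw/customers",            # hypothetical dataset names
    output_dataset="clean/customers",
    requirements=[
        QualityRequirement("accuracy", 80.0),
        QualityRequirement("completeness", 60.0, attributes=["email", "age"]),
        QualityRequirement("consistency", 85.0),
    ],
)
```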

Data models and data domains

Data models: if the data is structured, a schema is provided to add more detailed quality settings for all attributes. Otherwise, if there are no such attributes or types, the data is considered unstructured, and its quality evaluation consists of a set of general Quality Indicators (QI). In our framework, these QI are provided especially for cases where DQDs cannot be identified directly for an easy quality assessment.

Data domains: each data domain has its own set of default quality requirements. Some are very sensitive to accuracy and completeness; others prioritize data currency and higher timeliness. This module adds value for users or applications when it comes to quality-requirements elicitation.

Figure 6: BDQP and quality requirements settings

Figure 7: Exploratory quality profiling modules

Data quality profile creation: Once the Big Data Quality Project (BDQP) is initiated, the DQP level 0 (DQP0) is created and consists of the following elements, as illustrated in Fig. 7 :

Data sources information, which may include datasets, location, URL, origin, type, and size.

Information about data that can be created or extracted from metadata if available, such as database schema, data attributes names and types, data profile, or basic data profile.

Data domains such as business, health, commerce, or transportation.

Data users, which may include the names and positions of each member of the project, security credentials, and data access levels.

Data application platforms, software, programming languages, or applications that are used to process the data. These may include R, Python, Java, Julia, Orange, Rapid Miner, SPSS, Spark, and Hadoop.

Data quality requirements: for each dataset, the expected quality ratios and the tolerance levels within which the data is accepted; otherwise, the data is discarded or repaired. These can also be set as a range of quality tolerance levels. For example, a DQD completeness requirement of at least 67% means the accepted ratio of missing values is at most 33% (100% − 67%); see the short check below.
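
The completeness example translates directly into a check like the following (a trivial sketch; the 67% threshold is taken from the text):

```python
# Trivial check mirroring the example: completeness >= 67% is the same
# constraint as missing values <= 33%.
def accepts(completeness_pct: float, min_completeness: float = 67.0) -> bool:
    return completeness_pct >= min_completeness

print(accepts(70.0))   # True: 30% missing is within the 33% tolerance
print(accepts(60.0))   # False: 40% missing exceeds it
```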

Data quality profile (DQP) and repository (DQPREPO)

Hereafter, we describe the content of the DQP, the DQP repository, and the DQP levels captured through the lifecycle of the framework processes.

  • Data quality profile

The data quality profile is generated once a Big Data Quality Project is created. It contains, for example, information about the data sources, domain, attributes, or features. This information may be retrieved from metadata, data provenance, schema, or from the dataset itself. If not available, data preparation (sampling and profiling) is needed to collect and extract important information, which will support the upcoming processes, as the Data Profile (DP) is created.

An exploratory quality profiling step generates a quality-rules proposal list. The DP is updated with these rules and converted into a DQP. This helps the user obtain an overview of some DQDs and make a better attribute selection based on this first quality approximation, with a ready-to-use list of rules for pre-processing.

The user/app quality requirements (quality tolerance levels, DQDs, and targeted attributes) are set and added to the DQP. The previously proposed quality rules are then typically updated and tuned, or the quality-requirement parameters are completely redefined.

The mapping and selection phase will update the DQP with a DQES, which contains the set of attributes to be evaluated for a set of DQDs, using a set of metrics from the DQP repository.

The Quantitative Quality Evaluation component assesses the DQ and updates the DQES with DQD Scores.

The DQES scores then pass through quality control; if they are validated, the DQP is executed in the pre-processing stage and confirmed in the repository.

If the scores (based on the quality requirements) are not valid, a quality rules discovery, validation, and optimization will be added/updated to the DQP configuration to obtain a valid DQD score that satisfies the quality requirements.

Continuous quality monitoring is performed; any DQ failure triggers a DQP update.

The DQP Repository: The DQPREPO contains detailed data quality profiles per data source and dataset. In the following, an information list managed by the repository is presented:

Data Quality User/App requirements.

Data Profiles, Metadata, and Data Provenance.

Data Quality Profiles (e.g. Data Quality Evaluation Schemes, and Data Quality Rules).

Data Quality Dimensions and related Metrics (metrics formulas and aggregate functions).

Data Domains (DQD’s, BD Characteristics).

DQD’s vs BD Characteristics.

Pre-processing Activities (e.g. Cleansing, and Normalizing) and functions (to replace missing values).

DQD’s vs DQ Issues vs PPF: Pre-processing Functions.

DQD’s priority processing in Quality Rules.

At every stage, module, task, or process, the DQP repository is incrementally updated with quality-related information. This includes, for example, quality requirements, DQES, DQD scores, data quality rules, Pre-Processing activities, activity functions, DQD metrics, and Data Profiles. Moreover, the DQP’s are organized per Data Domain and datatype to allow reuse. Adaptation is performed in the case of additional Big Datasets.

In Table 5 , an example of the information managed by the DQP repository, along with its pre-processing activities (PPA) and their related functions (PPAF), is presented.

DQP lifecycle (Levels) : The DQP goes through the complete process flow of the proposed BDQMF. It starts with the specification of the Big Data Quality Project and ends with quality monitoring as an ongoing process that closes the quality enforcement loop and triggers other processes, which handle DQP adaptation, upgrade, or reuse. In Table 6 , the various DQP levels and their interaction within the BDQM Framework components are described. Each component involves process operations applied to the DQP.

Data preparation: sampling and profiling

Data preparation generates representative Big Data samples that serve as an entry for profiling, quality evaluation, and quality rules validation.

Sampling: several sampling strategies can be applied to Big Data, as surveyed in [ 54 , 55 ]. In these works, the authors evaluated the effect of sampling methods on Big Data and concluded that sampling large datasets reduces the run-time and computational footprint of link-prediction algorithms while maintaining adequate prediction performance. In statistics, the bootstrap technique evaluates the sampling distribution of an estimator by resampling with replacement from the original sample. In the Big Data context, bootstrap sampling has been studied in several works [ 56 , 57 ]. In the proposed data quality evaluation scheme, we use the Bag of Little Bootstraps (BLB) [ 58 ], which combines the results of bootstrapping multiple small subsets of a Big Data dataset. The BLB algorithm draws small samples from the original Big Dataset without replacement; for each such sample, another set of samples is created by resampling with replacement.
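
A hedged sketch of BLB for a simple statistic (the mean) is shown below; the subset size \(b = n^{0.6}\) and the subset/resample counts are conventional choices, not values prescribed by the paper:

```python
# Hedged sketch of the Bag of Little Bootstraps (BLB) for the mean of a large
# 1-D dataset; parameter choices here are illustrative conventions.
import numpy as np

def blb_standard_error(data, n_subsets=5, n_resamples=50, seed=0):
    rng = np.random.default_rng(seed)
    n = len(data)
    b = int(n ** 0.6)                                      # a common BLB subset size
    subset_errors = []
    for _ in range(n_subsets):
        subset = rng.choice(data, size=b, replace=False)   # sample WITHOUT replacement
        estimates = []
        for _ in range(n_resamples):
            # A size-n resample WITH replacement is represented by multinomial
            # counts over the b subset points; the weighted mean equals the
            # mean of that full-size resample without materializing it.
            counts = rng.multinomial(n, np.full(b, 1.0 / b))
            estimates.append(np.average(subset, weights=counts))
        subset_errors.append(np.std(estimates))            # bootstrap SE within subset
    return float(np.mean(subset_errors))                   # average across subsets

data = np.random.default_rng(1).normal(50.0, 10.0, 200_000)
print(blb_standard_error(data))   # close to 10 / sqrt(200_000) ≈ 0.0224
```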

Profiling: the data profiling module performs data quality screening based on statistics and summary information [ 59 , 60 , 61 ]. Since profiling is meant to discover data characteristics from data sources, it is considered a data assessment process that provides a first summary of data quality, reported in the data profile. Such information includes, for example, the data format description; the different attributes, their types, and values; basic quality-dimension evaluations; data constraints (if any); and data ranges (max and min, or a set of specific values or subsets).

More precisely, the information about the data is of two types: technical and functional. This information can be extracted from the data itself without any additional representation, using metadata or a descriptive header file, or by parsing the data with analysis tools. This task may become very costly for Big Data; therefore, to avoid costs driven by data size, the same BLB-based sampling process is used, reducing the data to a representative sample whose profiling results are then combined. In the proposed framework, the data profile is represented as a data quality profile of the first level ( DQP1 ), generated after the profiling phase. Moreover, data profiling provides useful information that leads to significant data quality rules, usually called data constraints. These rules are mostly equivalent to a structured-data schema and are represented as technical and functional rules.
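
The kind of per-attribute summary such a profile records could look like the following minimal pandas sketch (the column set is our choice, not a prescribed schema):

```python
# Minimal data-profile sketch: a per-attribute summary of the sort the
# profiling module would record in DQP Level 1.
import pandas as pd

def basic_profile(df: pd.DataFrame) -> pd.DataFrame:
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "non_null": df.notna().sum(),
        "missing_pct": 100.0 * df.isna().mean(),
        "distinct": df.nunique(),
        "min": df.min(numeric_only=True),   # NaN for non-numeric columns
        "max": df.max(numeric_only=True),
    })

df = pd.DataFrame({"age": [25, None, 41], "city": ["Oslo", "Lyon", "Lyon"]})
print(basic_profile(df))
```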

According to [ 61 ], there are many activities and techniques used to profile the data. These may range from online, incremental, and structural, to continuous profiling. Profiling tasks aim at discovering information about the data schema. Some data sources are already provided with their data profiles, sometimes with minimal information. In the following, some other techniques are introduced. These techniques can enrich and bring value-added information to a data profile:

Data provenance inquiry : it tracks the data origin and provides information about data transformations, data copying, and its related data quality through the data lifecycle [ 62 , 63 , 64 ].

Metadata : it provides descriptive and structural information about the data. Many data types, such as images, videos, and documents, use metadata to provide deep information about their contents. Metadata can be represented in many formats, including XML, or it can be extracted directly from the data itself without any additional representation.

Data parsing (supervised/manual/automatic) : data parsing is required since not all the data has a provenance or metadata that describes the data. The hardest way to gather extra information about the data is to parse it. Automatic parsing can be initially applied. Then, it is tuned and supervised manually by a data expert. This task may become very costly when Big Data is concerned, especially in the case of unstructured data. Consequently, a data profile is generated to represent only certain parts of the data that make sense. Therefore, multiple data profiles for multiple data partitions must be taken into consideration.

Data profile : it is generated early in the Big Data project as DQP Level 0 (a data profile in its early form) and upgraded to a data quality profile within the data preparation component as DQP Level 1 . It is then updated and extended through all the components of the Big Data Quality Management Framework, reaching DQP Level 2 and, ultimately, DQP Level 8 : the profile applied to the data in the pre-processing phase, with its quality rules and related activities, to output pre-processed data that conforms to the quality requirements.

Exploratory quality profiling

Since we follow a data-driven approach that evaluates quality dimensions quantitatively from the data itself, two evaluation steps are adopted: quantitative quality evaluation based on user requirements, and exploratory quality profiling.

The exploratory quality profiling component is responsible for automatic data quality dimensions’ exploration without user interventions. The Quality Rules Proposals module, which produces a list of actions to elevate data quality, is based on some elementary DQDs that fit all varieties and data types.

A list of quality-rule proposals, based on the quality evaluation of the most commonly considered DQDs (e.g., completeness, accuracy, and uniqueness), is produced. This preliminary assessment is performed on the data itself using predefined scenarios meant to increase data quality for some basic DQDs. In Fig. 7, the steps involved in exploratory quality profiling for the generation of quality-rule proposals are depicted. DQP1 is extended to DQP2 after adding the Data Quality Rules Proposal ( DQRP ), which is generated by the "quality rules proposals" process.

This module is part of the DQ profiling process. It varies the DQD tolerance levels from minimum to maximum scores and applies a systematic list of predefined quality rules. These predefined rules are sets of actions applied to the data when the measured DQD scores are not within the tolerance level defined by the min and max scores; the actions range from deleting attributes only, to discarding observations only, to a combination of both. After these actions, a re-evaluation of the new DQD scores leads to a quality-rules proposal (DQRP) with known DQD target scores. In Table 7 , some examples of these predefined rule scenarios for the DQD completeness ( dqd  =  Comp ) are described, with an execution priority for each set of grouped actions. The DQD levels are set to vary from a 5% to a 95% tolerance score with a granularity step of 5; they can be set differently according to the chosen DQD and its sensitivity to the data model and domain. The selection of the best-proposed data quality rules is based on the KNN algorithm using Euclidean distance (Deng et al. 2016; [ 65 ]): it finds the quality-rule parameters closest to achieving (by default) high completeness with the least data reduction. The process can be refined by specifying other quality parameters; a simplified sketch follows.
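
The sketch below sweeps completeness tolerance levels and records score versus data loss; the single drop-attributes action stands in for the paper's richer scenario table, and a nearest-point search simplifies the KNN-based selection:

```python
# Simplified sketch of the exploratory sweep: vary the completeness tolerance
# from 5% to 95% (step 5), apply a drop-attributes action, re-evaluate, and
# pick the grid point nearest the quality target. The action and the
# nearest-point selection are stand-ins for the predefined scenarios and KNN.
import numpy as np
import pandas as pd

def sweep_completeness(df: pd.DataFrame):
    results = []
    for tol in range(5, 100, 5):                           # tolerance levels 5..95
        keep = [c for c in df.columns
                if 100.0 * df[c].notna().mean() >= tol]    # drop columns below tol
        reduced = df[keep]
        score = 100.0 * reduced.notna().mean().mean() if keep else 0.0
        loss = 100.0 * (1.0 - reduced.size / df.size)      # data reduction ratio
        results.append((tol, score, loss))
    return results

def best_rule(results, target=(95.0, 0.0)):
    # Euclidean distance to (high completeness, low loss), as in the text.
    points = np.array([(score, loss) for _, score, loss in results])
    return results[int(np.argmin(np.linalg.norm(points - np.array(target), axis=1)))]
```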

The modules involved in exploratory quality profiling for generating quality-rule proposals are illustrated in Fig.  8 .

Figure 8: Quality rules proposals with exploratory quality profiling

Quality mapping and selection

The quality mapping and selection module of the BDQM framework is responsible for mapping data features or attributes to DQDs in order to target the required quality evaluation scores. It generates a Data Quality Evaluation Scheme ( DQES ) and adds it to (updates) the DQP. The DQES contains the DQDs of the appropriate attributes to be evaluated using adequate metric formulas. As part of the DQP, the DQES contains, for each selected data attribute, the following list, considered essential for the quantitative quality evaluation:

The attributes: all or a selected list,

The data quality dimensions (DQD’s) to be evaluated for each selected attribute,

Each DQD has a metric that returns the quality score, and

The quality-requirement scores for each DQD, needed for validating the measured scores.

These requirements are general and target global quality levels; the mapping component refines the global settings into precise quality goals. Therefore, a mapping must be performed between the data quality dimensions and the targeted data features/attributes before proceeding with the quality assessment. Each DQD is measured for each attribute and sample. The mapping generates a DQES , which contains Quality Evaluation Requests ( QER ) \(Q_x\). Each QER \(Q_x\) targets a data quality dimension (DQD) for one attribute, all attributes, or a set of selected attributes, where x is the number of requests.

Quality mapping: many approaches are available to accomplish an efficient mapping process. These include automatic, interactive, manual, and quality-rules-proposal-based techniques:

Automatic : it completes the alignment and comparison of the data attributes (from the DQP) with the data quality requirements (either per attribute type or per attribute name). A set of DQDs is associated with each attribute for quality evaluation. The result is a set of associations to be executed and evaluated in the quality assessment component.

Interactive : it relies on experts’ involvement to refine, amend, or confirm the previous automated associations.

Manual : it uses a dashboard similar to the quality requirements dashboard, but more advanced and more detailed at the attribute level.

Quality rules proposals : the proposal list collected from the DQP2 is used to understand the impact of a DQD level on the data reduction ratio. These quality insights help decide which DQD best matches the quality requirements.

Quality selection (of DQDs, metrics, and attributes): it consists of selecting an appropriate quality metric to evaluate a data quality dimension for an attribute of a Big Data sample set; the metric returns a count of the correct values, i.e., those that comply with the metric formula. Each metric is computed over the attribute values against the DQD constraints. For example, accuracy can be defined as a count of correct attribute values within a certain range of values [v 1 , v 2 ]. Similarly, it can be defined as satisfying constraints related to the type of data, such as zip codes, emails, social security numbers, dates, or addresses.
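A small sketch of such a metric, under the range-check assumption used in the example (the function names are illustrative):

```python
# A DQD metric returns 1 when a value satisfies the constraint (here,
# accuracy as a range check), and the DQD score of a sample is the
# proportion of correct values.
def accuracy_metric(value, v1=0, v2=100) -> int:
    return int(value is not None and v1 <= value <= v2)

def dqd_score(sample, metric) -> float:
    """Percentage of observations in the sample that satisfy the metric."""
    return 100.0 * sum(metric(v) for v in sample) / len(sample)

print(dqd_score([10, 50, 150, None, 99], accuracy_metric))  # 60.0
```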

Let us define the tuple DQES (S, D, A, M) . Most of the information is provided by the BDQP(DS, DS', Req) with Req = (D, L, A) parameters. The profiling information is used to select the appropriate quality metric \(m_l\) to evaluate the data quality dimension \(q_l\) for an attribute \(a_k\) with a weight \(w_j\). In addition to the previous settings, let \(S: S(DS, N, n, R) \to s_i\) denote a sampling strategy.

Let us denote by M a set of quality metrics \(M = \{m_1, \ldots, m_l, \ldots, m_d\}\), where \(m_l\) is a quality metric that measures and evaluates a DQD \(q_l\) for each value of an attribute \(a_k\) in the sample \(s_i\), returning 1 if the value is correct and 0 otherwise. Each metric \(m_l\) is computed on whether the value of the attribute satisfies the \(q_l\) constraint; for example, the accuracy of an attribute may be defined as a range of values between 0 and 100, outside of which a value is incorrect. If the same DQD \(q_l\) is evaluated for a set of attributes and the weights are all equal, a simple mean is computed. The metric \(m_l\) is evaluated for each instance (cell or row) of the sample \(s_i\).

Let us denote by \(M_l^{(i)}, i = 1, \ldots, N\), the total for metric \(m_l\), which counts the number of observations that satisfy this metric for a DQD \(q_l\) of an attribute \(a_k\) in the \(i\)-th of the \(N\) samples drawn from the dataset DS .

The proportion of observations under the adequacy rule in a sample \(s_i\) is given by:

\( q_l s_i = \frac{M_l^{(i)}}{n} \)

The total proportion of observations under the adequacy rule for all samples is given by:

\( \overline{M_l} = \frac{1}{N} \sum_{i=1}^{N} \frac{M_l^{(i)}}{n} \)

where \(\overline{M_l}\) characterizes the \(q_l\) mean score for the whole dataset.

Let \(Q_x(a_k, q_l, m_l)\) represent a request for a quality evaluation, which results in the mean quality score of a DQD \(q_l\) for a measurable attribute \(a_k\), calculated via \(M_l\). Big Data samples are evaluated for a DQD \(q_l\) in each sample \(s_i\) for an attribute \(a_k\) with a metric \(m_l\), providing a \(q_l s_i\) score for each sample (described below in Quantitative Quality Evaluation ). The sample mean of \(q_l\) is then the final score for \(a_k\).

Let us denote a process that sorts and combines the quality evaluation requests (QER) by DQD or by attribute, re-arranging the \(Q_x(a_k, q_l, m_l)\) tuples into two types, depending on the evaluation selection group parameter:

Per DQD, identified as \(Q_x(AList(a_z), q_l, m_l)\), where \(AList(a_z)\) represents the attributes \(a_z\) ( z: 1…R ) to be evaluated for the DQD \(q_l\).

Per attribute, identified as \(Q_x(a_k, DList(q_l, m_l))\), where \(DList(q_l, m_l)\) represents the data quality dimensions \(q_l\) ( l: 1…d ) to be evaluated for the attribute \(a_k\).

In some cases, the type of combination is automatically selected for a certain DQD, such as consistency, when all the attributes are constrained by specific conditions. The combination is based either on attributes or on DQDs, and the DQES is constructed as follows:

DQES ( \(Q_x(AList(a_z), q_l, m_l)\) , …, …) or

DQES ( \(Q_x(a_k, DList(q_l, m_l))\) , …, …)

The completion of the quality mapping process updates the DQP Level 2 with a DQES set as follows:

DQES ( \(Q_x(a_k, q_l, m_l)\) , …, …) , where x ranges from 1 to the defined number of evaluation requests. Each \(Q_x\) element is a quality evaluation request for an attribute \(a_k\), a quality dimension \(q_l\), and a DQD metric \(m_l\).

The output of this phase is a DQES score, which contains the mean score of each DQ dimension for one or many attributes. Fig. 9 illustrates the mapping and selection data flow initiated using the Big Data quality project (BDQP) settings. This is accomplished either by using the same BDQP Req or by defining more detailed and refined quality parameters and a sampling strategy. Two types of DQES can be yielded:

A Data-Quality-Dimension-wise evaluation of a list of attributes, or

An Attribute-wise evaluation of many DQDs.

As described before, the quality mapping and selection component generates a DQES evaluation scheme for the dataset, identifying which DQD and attribute tuples to evaluate using a specific quality metric. A more detailed and refined set of parameters can also be set, as described in previous sections. The following steps construct the DQES in the mapping component (a sketch follows the list):

The QMS function extracts the Req parameters from the BDQP as (D, L, A) .

A quality evaluation request \(\left({a}_{k},{q}_{l},{m}_{l}\right)\) is generated from the (D, A) tuple.

A list is constructed with these quality evaluation requests.

The list is sorted either by DQD or by attribute, producing two types of lists:

A combination of requests per DQD generates quality requests for a set of attributes \(\left(AList\left({a}_{z}\right),{q}_{l},{m}_{l}\right)\) .

A combination of requests per attribute generates quality requests for a set of DQD’s \(\left({a}_{k},DList({q}_{l},{m}_{l})\right)\) .

A DQES is returned based on the evaluation selection group parameter (per DQD, per attribute).
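The sort-and-combine step referenced above can be sketched as follows; the tuple layout and grouping keys are illustrative assumptions.

```python
from collections import defaultdict

def group_requests(requests, by="dqd"):
    """Re-arrange a flat QER list either per DQD -> Qx(AList(az), ql, ml)
    or per attribute -> Qx(ak, DList(ql, ml)).
    requests: iterable of (attribute, dqd, metric) tuples."""
    grouped = defaultdict(list)
    for attribute, dqd, metric in requests:
        if by == "dqd":
            grouped[dqd].append(attribute)          # AList per dimension
        else:
            grouped[attribute].append((dqd, metric))  # DList per attribute
    return dict(grouped)

qers = [("age", "completeness", "m_comp"),
        ("age", "accuracy", "m_acc"),
        ("zip", "completeness", "m_comp")]
print(group_requests(qers, by="dqd"))
# {'completeness': ['age', 'zip'], 'accuracy': ['age']}
```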

figure 9

DQES parameters settings

Quantitative quality evaluation

The authors in [ 66 ] addressed how to evaluate a set of DQDs over a set of attributes. According to this study, the evaluation of Big Data quality is applied and iterated over many samples. The aggregation and combination of DQD scores are performed after each iteration, and the evaluation scores are added to the DQES, which in turn updates the DQP. We proposed an algorithm that computes the quality scores for a dataset based on a given quality mapping and quality metrics.

This algorithm evaluates quality metrics into scores, validates the scores against quality requirements, and generates quality rules from these scores [ 66 , 67 ]. There are rules related to each pre-processing activity, such as data cleansing rules, which eliminate data, and data enrichment rules, which replace or add data. Other activities, such as data reduction, reduce the data size by decreasing the number of features or attributes that have certain characteristics, such as low variance or high correlation with other features.

In this phase, all the information collected from the previous components (profiling, mapping, DQES) is included in the data quality profile Level 3. The important elements are the set of samples and the data quality evaluation scheme, which is executed on each sample to evaluate its quality attributes for a specific DQD.

DQP Level 3 provides all the information needed about the settings represented by the DQES to proceed with the quality evaluation. The DQES contains the following:

The selected DQDs and their related metrics.

The selected attributes with the DQD to be evaluated.

The DQD selection is based on the Big Data quality requirements expressed early, when initiating a Big Data quality project.

The attribute selection is set in the quality selection and mapping component (3).

The quantitative quality evaluation methodology is described as follows:

The selected DQD quality metrics measure and evaluate the DQD for each attribute observation in each sample from the sample set. For each attribute observation, the metric returns 1 if the value is correct and 0 otherwise.

Each metric is computed over all the sample observations whose attribute values are checked against the constraints. For example, the accuracy metric of an attribute may define that values between 20 and 70 are valid and all others invalid. The count of correct values out of the total sample observations is the DQD ratio, represented as a percentage (%). This is performed for all selected attributes and their selected DQDs.

The sample mean over all samples for each evaluated DQD represents a Data Quality Score (DQS) estimation \(\left(\overline{DQS}\right)\) of that data quality dimension for the data source.

DQP Level 4 : an update to DQP Level 3 that includes the data quality evaluation scheme (DQES) with the quality scores per DQD and per attribute ( DQES  +  Scores ).

In summary, the quantitative quality evaluation proceeds through sampling, DQD and metric selection, mapping to data attributes, quality measurement, and computation of the sample mean DQD ratios.
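A minimal sketch of this evaluation loop, assuming plain Python lists as samples and a completeness metric; the values of N, n, and the data are synthetic and illustrative.

```python
import random

def evaluate_dqd(samples, metric) -> float:
    """samples: list of lists of attribute values; returns the mean
    DQD ratio (in %) across samples, i.e. the DQS estimate."""
    ratios = [100.0 * sum(metric(v) for v in s) / len(s) for s in samples]
    return sum(ratios) / len(ratios)

# Synthetic population with random missing values.
population = [random.choice([None, random.randint(0, 120)]) for _ in range(10_000)]
samples = [random.sample(population, 200) for _ in range(30)]   # N=30 samples, n=200
completeness = lambda v: int(v is not None)
print(f"Estimated completeness DQS: {evaluate_dqd(samples, completeness):.1f}%")
```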

Let us denote by \(Q_x\) Score (quality score) the evaluation result of each quality evaluation request \(Q_x\) in the DQES . Two types of DQES exist, depending on the evaluation type, which means two kinds of result scores can be identified: organized per DQD over all attributes, or per attribute over all DQDs:

\( Q_x(AList(a_z), q_l, m_l) \to Q_x ScoreList(AList(a_z, Score), q_l, m_l) \) or

\( Q_x(a_z, DList(q_l, m_l)) \to Q_x ScoreList(a_z, DList(q_l, m_l, Score)) \)

where \(z = 1, \ldots, r\), with \(r\) the number of selected attributes, and \(l = 1, \ldots, d\), with \(d\) the number of selected DQDs.

The quality evaluation generates quality scores \(Q_x\) Score . A quality scoring model is used to assess these results; it is provided in the form of quality requirements that make the resulting scores intelligible, expressed as quality acceptance-level percentages. These quality requirements might be a set of values, an interval in which values are accepted or rejected, or a single score ratio percentage. The analysis of these scores against the quality requirements leads to the discovery and generation of quality rules for attributes violating the requirements.
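The scoring model can be sketched as follows, assuming a requirement is either a single minimum ratio or an acceptance interval; the key shapes are illustrative.

```python
def validate_scores(scores: dict, requirements: dict) -> dict:
    """scores: {(attribute, dqd): score in %};
    requirements: {(attribute, dqd): float or (min, max)}.
    Returns the subset of scores that violate their requirement."""
    violations = {}
    for key, score in scores.items():
        req = requirements.get(key)
        if req is None:
            continue                       # no requirement for this pair
        lo, hi = req if isinstance(req, tuple) else (req, 100.0)
        if not (lo <= score <= hi):
            violations[key] = (score, req)
    return violations

scores = {("age", "completeness"): 50.0, ("age", "accuracy"): 92.0}
reqs = {("age", "completeness"): 80.0, ("age", "accuracy"): (90.0, 100.0)}
print(validate_scores(scores, reqs))   # {('age', 'completeness'): (50.0, 80.0)}
```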

The quantitative quality evaluation process follows the steps described below for the case of evaluating a list of DQDs over several attributes ( \(Q_x(a_z, DList(q_l, m_l))\) ):

N samples (of size n ) are generated from the dataset DS using a BLB-based bootstrap sampling approach.

For each sample \(s_i\) generated in step 1, and for each selected attribute \(a_z\) ( \(z = 1, \ldots, r\) ) in the DQES, evaluate all the DQDs in the DList using their related metrics to obtain \(Q_x ScoreList(a_z, DList(q_l, m_l, Score), s_i)\) for each sample \(s_i\).

Over all the sample scores, evaluate the sample mean across all N samples for each attribute \(a_z\) with respect to the \(q_l\) evaluation scores, denoted \(\overline{q_{zl}}\).

For the dataset DS , evaluate the quality score mean \(\overline{q_l}\) of each DQD over all attributes \(a_z\), as follows:

\( \overline{q_l} = \frac{1}{r} \sum_{z=1}^{r} \overline{q_{zl}} \)

Fig. 10 illustrates that \(q_{zl} s_i Score\) is the evaluation of DQD \(q_l\) on the sample \(s_i\) for an attribute \(a_z\) with a metric \(m_l\); \(\overline{q_{zl}}\) represents the quality score sample mean for the attribute \(a_z\).

figure 10

Big data sampling and quantitative quality evaluation

Quality control

Quality control is initiated when the quality evaluation results are available and reported in the DQES of DQP Level 4 . During quality control, all the quality scores are checked against the quality requirements of the Big Data project. If any anomalies or non-conformances are detected, the quality control component forwards a DQP Level 5 to the data quality rules discovery component.

At this point, several cases arise. An iteration process is performed until the required quality levels are satisfied, or until the experts decide to stop the quality evaluation process and re-evaluate their requirements. At each phase there is a form of quality control within each quality process, even if it is not explicitly specified.

The quality control acts in the following cases:

Case 1: This case applies when the quality has been estimated and no rules are yet included in DQP Level 4 (the DQP is considered a report, since the data quality is still being inspected and only reports are generated, with no actions yet performed).

If the quality scores are accepted, no quality actions need to be applied to the data. DQP Level 4 remains unchanged and acts as a full data quality report, updated with a positive validation of the data per quality requirement. It might still include some simple pre-processing, such as attribute selection and filtering. According to the data analytics requirements and the expected results planned in the Big Data project, more specific data pre-processing actions may be performed, but they are not quality-related in this case.

If the quality scores are not accepted, the DQP Level 4 DQES scores are analyzed, and the DQP is updated with a quality error report on the offending DQD scores and their data attributes. DQP Level 5 is created, and it is analyzed by the quality rules discovery component to determine the pre-processing activities to be executed on the data.

Case 2: In the presence of a DQP Level 6 , which contains a quality evaluation request for the pre-processed samples with discovered quality rules, the following situations may occur:

When the quality control finds that the DQP Level 6 rules are valid and satisfy the quality requirements, DQP Level 6 is updated to DQP Level 7 and confirmed as the final data quality profile, which will be applied to the data in the pre-processing phase. DQP Level 7 is considered important because it contains validated quality rules.

When quality control is not fully or only partially satisfied, DQP Level 6 is sent back for adaptation by the quality selection and mapping component, along with valid and invalid quality rules, quality scores, and error reports. These reports highlight, with an unacceptable score interval, the quality rules that have not satisfied the quality requirements. The quality selection and mapping component provides automatic or manual analysis and assessment of the unsatisfied quality rules with respect to their targeted DQDs, attributes, and quality requirements. An adaptation of the quality requirements is needed to re-validate these rules. Finally, the expert users have the final word on whether to continue or break the process and proceed to the pre-processing phase with the valid rules. As part of the framework's reuse specification, the invalid rules are kept within the DQP for future re-evaluation.

Case 3: The control component always proceeds based on the quality scores and quality requirements for both input and pre-processed data. Continuous control and monitoring are responsible for initiating DQP updates and adaptation if the quality requirements are relaxed.
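The first two cases can be summarized in a small control-flow sketch; the dict-based DQP and the level numbers follow the text above, everything else is illustrative.

```python
def quality_control(dqp: dict, violations: dict) -> dict:
    if dqp["level"] == 4:                     # Case 1: scores only, no rules yet
        if not violations:
            dqp["report"] = "all quality requirements satisfied"
        else:
            dqp["level"] = 5                  # hand over to rules discovery
            dqp["error_report"] = violations
    elif dqp["level"] == 6:                   # Case 2: rules awaiting validation
        if not violations:
            dqp["level"] = 7                  # final, validated quality profile
        else:
            dqp["invalid_rules"] = violations  # sent back for adaptation
    return dqp

print(quality_control({"level": 4}, {("age", "completeness"): (50.0, 80.0)}))
```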

Quality rules: discovery, validation, optimization, and execution

In [ 67 ], it was reported that if the DQD scores do not conform to the quality requirements, the failed scores are used to discover data quality rules. When executed on the data, these rules enhance its quality. They are based on known pre-processing activities such as data cleansing. Each activity has a set of functions targeting different types of data in order to increase their DQD ratios and the overall quality of the data source or dataset(s).

When quality rules ( QR ) are applied to a sample set S , a pre-processed sample set S' is generated. A quality evaluation process is invoked on S' , generating DQD scores for S' . A score comparison between S and S' is then conducted to retain only qualified and valid rules with a higher percentage of success on the data. An optimization scheme is then applied to the list of valid quality rules before they are applied to production data. The predefined optimization schemes range over (1) rule priority, (2) rule redundancy, (3) rule removal, (4) rule grouping per attribute, (5) per DQD, or (6) per duplicate rules.

Quality rules discovery: The discovery is based on DQP Level 5 from the quality control component. An analysis of the quality scores is initiated, and an error report is extracted. If the DQD scores do not conform to the quality requirements, the failed scores are used to discover data quality rules, which, when executed on the data, enhance its quality. They are based on known pre-processing activities such as data cleansing. The discovery component comprises several modules: analysis of the DQES DQD scores against the requirements, combination of attribute pre-processing activities for each targeted DQD, and rules generation.

For example, an attribute with a 50% missing-data score is not accepted when the required score is 20% or less. This initiates the generation of a quality rule consisting of a data cleansing activity for the observations that do not satisfy the quality requirements. The data cleansing or data enrichment activity is selected from the Big Data quality profile repository. The quality rule targets all the related attributes marked for pre-processing, to reduce the missing-data ratio from 50% to 20% for the DQD completeness. Moreover, in the case of completeness, cleansing is not the only option for missing values; several alternative pre-processing activities are available, such as a missing-value replacement activity with functions for several replacement methods like the mean, the mode, and the median.
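The completeness example above can be sketched as follows, assuming pandas; the rule fields and the imputation strategies (mean, mode, median) mirror the text, while all names are illustrative.

```python
import pandas as pd

def discover_completeness_rule(attribute: str, score: float, required: float,
                               strategy: str = "mean") -> dict:
    """Generate a quality rule when the completeness score fails the requirement."""
    if score >= required:
        return {}                              # no rule needed
    if strategy == "cleanse":
        return {"attribute": attribute, "activity": "data_cleansing",
                "action": "drop_rows_with_missing"}
    return {"attribute": attribute, "activity": "data_enrichment",
            "action": f"impute_{strategy}"}

def apply_rule(df: pd.DataFrame, rule: dict) -> pd.DataFrame:
    col = rule["attribute"]
    if rule["action"] == "drop_rows_with_missing":
        return df.dropna(subset=[col])
    if rule["action"] == "impute_mean":
        return df.fillna({col: df[col].mean()})
    if rule["action"] == "impute_median":
        return df.fillna({col: df[col].median()})
    if rule["action"] == "impute_mode":
        return df.fillna({col: df[col].mode().iloc[0]})
    return df

df = pd.DataFrame({"age": [25, None, 40, None, 31, 58]})
rule = discover_completeness_rule("age", score=66.7, required=80.0)
print(apply_rule(df, rule))                    # missing ages imputed with the mean
```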

The pre-processing activities are provided by the repository to achieve the required data quality. Several options are available for selecting pre-processing activities:

Automatic : discovering and suggesting a set of activities or DQ rules.

Predefined : selecting ready-to-use quality rule proposals from the exploratory quality profiling component, or predefined pre-processing activity functions from the repository, indexed by DQD.

Manual : giving the expert the ability to query the exploratory quality profiling results for the best rules achieving the required quality, using KNN-based filtering.

Quality rules validation: The quality rules generated by the discovery component are set in DQP Level 6. The rules validation process starts when the DQR list is applied to the sample set S , resulting in a pre-processed sample set S' generated by the related pre-processing activities. A quality evaluation process is then invoked on S' , generating DQD scores for S' . A score comparison between S and S' is conducted to retain only qualified and valid rules with a higher percentage of success on the data. After analyzing these scores, two sets of rules are identified: successful and failed rules.
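A minimal sketch of this validate-and-filter step, with rules and metrics as plain Python callables (all names are illustrative):

```python
def validate_rules(samples, rules, evaluate, requirement: float):
    """evaluate(sample) -> DQD score in %; each rule maps a sample to a
    pre-processed sample. Keep rules that improve the score and meet the
    requirement; the rest are failed rules."""
    valid, failed = [], []
    for rule in rules:
        before = sum(map(evaluate, samples)) / len(samples)
        preprocessed = [rule(s) for s in samples]          # S -> S'
        after = sum(map(evaluate, preprocessed)) / len(preprocessed)
        (valid if after > before and after >= requirement else failed).append(rule)
    return valid, failed

samples = [[1, None, 3], [None, None, 6], [7, 8, None]]
completeness = lambda s: 100.0 * sum(v is not None for v in s) / len(s)
drop_missing = lambda s: [v for v in s if v is not None] or [0]
valid, failed = validate_rules(samples, [drop_missing], completeness, 90.0)
print(len(valid), len(failed))    # 1 0
```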

Quality rules optimization: After the set of discovered valid quality rules is selected, an optimization process is activated to reorganize and filter the rules. This is necessary because of the evaluation parameters set in the mapping component and the refinement of the quality requirements. These choices, together with the rules validation process, produce a list of individual quality rules that, if applied as generated, might have the following consequences:

Redundant rules.

Ineffective rules due to the order of execution.

Multiple rules, which target the same DQD with the same requirements.

Multiple rules, which target the same attributes for the same DQD and requirements.

Rules that drop attributes or rows must be applied first or given a higher priority, to avoid applying rules to data items that are meant to be dropped (Table 8 ).

The quality rules optimization component applies an optimization scheme to the list of valid quality rules before they are applied to production data in the pre-processing phase. The predefined optimization schemes vary according to the following:

Rule execution priority per attribute or DQD, per pre-processing activity, or per pre-processing function.

Rule redundancy removal per attribute or DQD.

Rule grouping or combination per activity, per attribute, per DQD, or per duplicates.

For invalid rules, the component performs several actions, including rule removal or rule adaptation from previously generated proposals in the exploratory quality profiling component for the same targeted tuple (attributes, DQDs).
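A sketch of the optimization step, combining deduplication with a drop-first priority ordering; the priority policy and rule fields are illustrative assumptions.

```python
DROP_ACTIONS = {"drop_attribute", "drop_rows_with_missing"}

def optimize_rules(rules: list) -> list:
    # 1. Redundancy removal: identical rules collapse to one.
    unique = [dict(t) for t in {tuple(sorted(r.items())) for r in rules}]
    # 2. Priority ordering: drop-rules first, remainder grouped per attribute.
    unique.sort(key=lambda r: (r["action"] not in DROP_ACTIONS, r["attribute"]))
    return unique

rules = [
    {"attribute": "age", "action": "impute_mean"},
    {"attribute": "zip", "action": "drop_attribute"},
    {"attribute": "age", "action": "impute_mean"},   # duplicate
]
print(optimize_rules(rules))
# [{'action': 'drop_attribute', 'attribute': 'zip'},
#  {'action': 'impute_mean', 'attribute': 'age'}]
```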

Quality rules execution: The quality rules execution consists of pre-processing the data using the DQP, which embeds the data quality rules that enhance the quality to reach the agreed requirements. As part of the monitoring module, a sample set from the pre-processed data is used to re-assess the quality and detect eventual failures.

Quality monitoring

Quality monitoring is a continuous quality control process that relies on the DQP. The purpose of monitoring is to validate the DQP across all the Big Data lifecycle processes. The DQP repository is updated during and after the complete lifecycle, as well as after user feedback on the data, the quality requirements, and the mapping.

As illustrated in Fig. 11 , the monitoring process takes a scheduled snapshot of the pre-processed Big Data all along the BDQMF for the BDQ project. This data snapshot is a set of samples whose quality is evaluated in BDQMF component (4). Quality control is then conducted on the quality scores, and an update is made to the DQP. The quality report may highlight a quality failure and the evolution of its ratio through multiple sampling snapshots of the data.
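The monitoring loop can be sketched as follows; the schedule, sample counts, and report fields are illustrative.

```python
import random

def monitor(dqp: dict, data: list, evaluate, requirement: float,
            iterations: int = 3, sample_size: int = 100) -> dict:
    """At each scheduled iteration, draw a snapshot of samples from the
    pre-processed data, re-evaluate quality, and append a report to the DQP."""
    dqp.setdefault("monitoring_reports", [])
    for i in range(iterations):
        snapshot = [random.sample(data, sample_size) for _ in range(5)]
        score = sum(map(evaluate, snapshot)) / len(snapshot)
        dqp["monitoring_reports"].append(
            {"iteration": i, "score": score, "failure": score < requirement})
    return dqp

data = [random.choice([None, 1]) for _ in range(1_000)]
completeness = lambda s: 100.0 * sum(v is not None for v in s) / len(s)
print(monitor({"level": 10}, data, completeness, 80.0)["monitoring_reports"])
```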

figure 11

Quality monitoring component

The monitoring process strengthens and enforces quality across the Big Data value chain using the BDQM framework while reusing the data quality profile information. For each quality monitoring iteration on the datasets from the data source, quality reports are added to the data quality profile, updating it to DQP Level 10 .

Data processing, analytics, and visualization

This process involves the application of algorithms or methodologies that extract insights from the ready-to-use, quality-enhanced data. The value of the processed data is then projected visually, as dashboards and graphically enhanced charts, for decision-makers to act upon. Big Data visualization approaches are of high importance for the final exploitation of the data.

Implementations: Dataflow and quality processes development

In this section, we give an overview of the dataflow across the various processes of the framework, highlight the implemented quality management processes along with the supporting application interfaces developed for the main processes, and describe the ongoing process implementations and evaluations.

Framework dataflow

In Fig. 12 , we illustrate the whole process flow of the framework, from the inception of the quality project in its specification and requirements to the quality monitoring phase. As an ongoing process, monitoring is part of the quality enforcement loop and may trigger other processes that handle several quality profile operations, like DQP adaptation, upgrade, or reuse.

figure 12

Big data quality management framework data flow

In Table 9 , we enumerate and detail the multiple processes and their interactions within the BDQM framework components, including their inputs and outputs after executing the related activities on the quality profile (DQP), as detailed in the previous section.

Quality management processes’ implementation

In this section, we describe the implementation of our framework's most important components and processes and their contributions to the quality management of Big Data across its lifecycle.

Core processes implementation

As noted above, the core framework processes have been implemented and evaluated. In the following, we describe how these components were implemented and evaluated.

Quality profiling : one of the central components of our framework is the data quality profile (DQP). Initially, the DQP implements a simple data profile of a Big Data set as an XML file (a DQP sample is illustrated in Fig. 13 ).

figure 13

Example of data quality profile

After traversing several framework components' processes, it is updated to a data quality profile. The data quality evaluation process is one of the activities that updates the DQP with quality scores, which are later used to discover data quality rules. These rules, when applied to the original data, ensure an output dataset of higher quality. The DQP is finally executed by the pre-processing component. By the end of the lifecycle, the DQP contains all relevant information, such as data quality rules that target a set of data sources with multiple datasets, data attributes, data quality dimensions such as accuracy, and pre-processing activities like data cleansing, data integration, and data normalization. In short, the DQP contains all the information about the data, its quality, the user quality requirements, the DQDs, quality levels, attributes, the data quality evaluation scheme (DQES), quality scores, and the data quality rules. The DQP is stored in the DQP repository, which performs many tasks related to the DQP; the DQP lifecycle and its repository are described in the following.
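To fix ideas, here is a minimal sketch of what a serialized DQP might carry, shown in JSON form (the framework's actual schema is the XML one illustrated in Fig. 13; all keys and values below are illustrative):

```python
import json

dqp = {
    "level": 3,
    "data_source": "customers.csv",            # hypothetical dataset
    "attributes": ["age", "zip", "email"],
    "dqes": [                                   # quality evaluation scheme
        {"attribute": "age", "dqd": "completeness",
         "metric": "non_missing_ratio", "required": 80.0, "score": 72.5},
    ],
    "quality_rules": [                          # discovered, validated rules
        {"attribute": "age", "activity": "data_enrichment",
         "action": "impute_mean", "status": "validated"},
    ],
}
print(json.dumps(dqp, indent=2))                # persisted in the DQP repository
```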

Quality requirements dashboard : developed as a web-based application, shown in Fig. 14 below, to capture the user's requirements and other quality information, such as data quality dimension requirement specifications. This application can be extended with extra information about data sources, such as attributes and their types. The user is guided through the interface to specify the right attribute values and is also given the option to upload an XML file containing the relationships between attributes. The recorded requirements are finally saved to a data quality profile Level 0, which is used in the next stage of the quality management process.

figure 14

Quality requirements dashboard

Data preparation and sampling : The framework's operations start once the quality project's minimal specifications are set. It initiates and provides a data quality summary, named the data quality profile (DQP), by running an exploratory quality profiling assessment on data samples (using the BLB sampling algorithm). The DQP is designed to be the core component of the framework, and every quality-related update and result is recorded in it. The DQP is stored in a quality repository and registered in the Big Data's provenance to keep track of data changes due to quality enhancements.

Data quality mapping and rules discovery components : data quality mapping simplifies, and adds more control to, the whole data quality assessment process. The implemented mapping links and categorizes all the elements required by the quality project, from Big Data quality characteristics, pre-processing activities, and their related technique functions, to data quality rules, dimensions, and their metrics. The implementation of data quality rules discovery from evaluation results reveals the actions and transformations that, when applied to the dataset, accomplish the targeted quality level. These rules are the main ingredients of pre-processing activities. The role of a DQ rule is to tackle the sources of bad quality by defining a list of actions related to each quality score. The DQ rules are the result of a systematic and planned data quality assessment analysis.

Quality profile repository (QPREPO) : Finally, our framework implements the QPREPO to manage the data quality profiles for different data types and domains and to adapt or optimize existing profiles. This repository manages the data quality dimensions with their related metrics, as well as the pre-processing activities and their activity functions. A QPREPO entry is created for each Big Data quality project, with the related DQP containing information about each dataset, data source, data domain, and data user. This information is essential for DQP reuse, adaptation, and enhancement for the same or different data sources.

Implemented approaches for quality assessment

The framework uses various approaches for quality assessment: (1) exploratory quality profiling and (2) a quantitative quality assessment approach using DQD metrics; a new component for (3) a qualitative quality assessment is anticipated.

Exploratory quality profiling implements an automatic quality evaluation that is performed systematically on all data attributes for basic DQDs. The resulting calculated scores are used to generate quality rules for each quality tolerance ratio variation. These rules are then applied to other data samples, and the quality is reassessed. An analysis of the results provides an interactive quality-based rules search using several ranking algorithms (maximization, minimization, applying weights).

The quantitative quality assessment implements a quick data quality evaluation strategy supported by sampling and profiling processes for Big Data. The evaluation is conducted by measuring the data quality dimensions (DQDs) of attributes, using specific metrics to calculate a quality score.

The qualitative quality assessment approach implements a deep quality assessment to discover hidden quality aspects and their impact on the outputs of the Big Data lifecycle. These quality aspects must be quantified into scores and mapped to the related attributes and DQDs. This quantification is achieved by applying several feature selection strategies and algorithms to data samples. These qualitative insights are combined with those obtained earlier in the quality management process by the quantitative quality evaluation.

Framework development, deployment, and evaluation

Development, deployment, and evaluation of our BDQMF framework follow a systematic modular approach in which the various components of the framework are developed and tested independently, then integrated with the other components to compose the overall solution. Most of the components are implemented in R and Python using the SparkR and PySpark libraries, respectively. The supporting files, like the DQP, DQES, and configuration files, are written in XML and JSON formats. Big Data quality project requests and constraints, including the data sources and the quality expectations, are implemented within the solution, where more than one module might be involved. The BDQMF components are deployed following the Apache Hadoop and Spark ecosystem architecture.

The implemented and deployed BDQMF modules and the developed APIs are described in the following:

Quality settings mapper (QSM): implements an interface for the automatic selection and mapping of DQDs and dataset attributes from the initial DQP.

Quality settings parser (QSP): responsible for parsing DQP settings and data files and loading their parameters into the execution environment. It is also used to extract quality rules and scores from the DQES in the DQP.

Data loader (DL): implements filtering, selection, and loading of all types of data files required by the BDQMF, including datasets from data sources, into the Spark environment (e.g., DataFrames, tables), where they are used by various processes or persisted in the database for further reuse. For data selection, it uses SQL to retrieve only the attributes set in the DQP settings (a minimal sketch follows this list).

Data samples generator (DSG): it generates data samples from multiple data sources.

Quality inspector and profiler (QIP): responsible for all qualitative and quantitative quality evaluations of data samples in all the BDQMF lifecycle phases. The inspector assesses all the default and required DQDs, and all quality evaluations are set into the DQES within the DQP file.

Preprocessing activities and functions execution engine (PPAF-E): all the repository's preprocessing activities, along with their related functions, are implemented as APIs in Python and R. When requested, this library loads the necessary methods and executes them within the preprocessing activities for rules validation and rules execution in phase 9.

Quality rules manager (QRM): one of the most important modules of the framework. It implements and delivers the following features:

Analysis of quality results,

Discovery and generation of quality rule proposals,

Validation of quality rules against the requirement settings,

Refinement and optimization of quality rules, and

Quality rules ACID operations on the DQP files and the repository.

Quality monitor (QM) : responsible for monitoring, triggering, and reporting any quality change over the whole Big Data lifecycle, to assure the efficiency of the quality improvement achieved by the discovered data quality rules.

BDQMF-Repo: the repository where all the quality-related files, settings, requirements, and results are stored. The repo uses HBase or MongoDB to fulfill the scalability requirements of Big Data ecosystem environments under intensive data updates.
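As referenced in the Data loader item above, here is a minimal PySpark sketch of attribute-restricted loading via Spark SQL; the file path, attribute list, and view name are illustrative.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("BDQMF-DataLoader").getOrCreate()

# Attributes would normally be read from the DQP settings file.
dqp_attributes = ["age", "zip", "email"]

# Load the dataset into the Spark environment and expose it to SQL.
df = spark.read.csv("customers.csv", header=True, inferSchema=True)
df.createOrReplaceTempView("dataset")

# Retrieve only the attributes set in the DQP settings.
selected = spark.sql(f"SELECT {', '.join(dqp_attributes)} FROM dataset")
selected.show(5)
```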

Conclusion and future work

Big Data quality has attracted the attention of researchers, as it is considered the key differentiator leading to high-quality insights and data-driven decisions. In this paper, a Big Data Quality Management Framework addressing end-to-end quality in the Big Data lifecycle was proposed. The framework is based on a data quality profile (DQP), which is augmented with valuable information as it travels across the different stages of the framework, starting from the Big Data project parameters, quality requirements, quality profiling, and quality rule proposals. The exploratory quality profiling feature, which extracts quality information from the data, helps build a robust DQP with quality rule proposals and is a stepping stone toward the configuration of the data quality evaluation scheme. Moreover, the extracted quality rule proposals greatly benefit the quality dimension mapping and attribute selection component, supporting users with quality data indicators characterized by their profile.

The framework dataflow shows that the quality of any Big Data set is evaluated through the exploratory quality profiling component and through quality rules extraction and validation, towards an improvement in its quality. It is of great importance to ensure the right selection of a combination of targeted DQD levels, observations (rows), and attributes (columns) for efficient quality results, while not sacrificing vital data by considering only one DQD. The resulting quality profile, based on the quality assessment results, confirms that the quality information it contains significantly improves the quality of Big Data.

In future work, we plan to extend the quantitative quality profiling with qualitative evaluation. We also plan to extend the framework to cope with unstructured Big Data quality assessment.

Availability of data and materials

The data used in this work is available from the first author and can be provided upon request. The data includes sampling data, pre-processed data, etc.

Chen M, Mao S, Liu Y. Big data: A survey. Mobile Netw Appl. 2014;19:171–209. https://doi.org/10.1007/s11036-013-0489-0 .


Chiang F, Miller RJ. Discovering data quality rules. Proceed VLDB Endowment. 2008;1:1166–77.

Yeh, P.Z., Puri, C.A., 2010. An Efficient and Robust Approach for Discovering Data Quality Rules, in: 2010 22nd IEEE International Conference on Tools with Artificial Intelligence (ICTAI). Presented at the 2010 22nd IEEE International Conference on Tools with Artificial Intelligence (ICTAI), pp. 248–255. https://doi.org/10.1109/ICTAI.2010.43

Ciancarini, P., Poggi, F., Russo, D., 2016. Big Data Quality: A Roadmap for Open Data, in: 2016 IEEE Second International Conference on Big Data Computing Service and Applications (BigDataService). Presented at the 2016 IEEE Second International Conference on Big Data Computing Service and Applications (BigDataService), pp. 210–215. https://doi.org/10.1109/BigDataService.2016.37

Firmani D, Mecella M, Scannapieco M, Batini C. On the meaningfulness of “big data quality” (Invited Paper). Data Sci Eng. 2016;1:6–20. https://doi.org/10.1007/s41019-015-0004-7 .

Rivas, B., Merino, J., Serrano, M., Caballero, I., Piattini, M., 2015. I8K|DQ-BigData: I8K Architecture Extension for Data Quality in Big Data, in: Advances in Conceptual Modeling, Lecture Notes in Computer Science. Presented at the International Conference on Conceptual Modeling, Springer, Cham, pp. 164–172. https://doi.org/10.1007/978-3-319-25747-1_17

Manyika, J., Chui, M., Brown, B., Bughin, J., Dobbs, R., Roxburgh, C., Byers, A.H., 2011. Big data: The next frontier for innovation, competition, and productivity. McKinsey Global Institute 1–137.

Chen CP, Zhang C-Y. Data-intensive applications, challenges, techniques and technologies: A survey on Big Data. Inf Sci. 2014;275:314–47.

Hashem IAT, Yaqoob I, Anuar NB, Mokhtar S, Gani A, Ullah Khan S. The rise of “big data” on cloud computing: Review and open research issues. Inf Syst. 2015;47:98–115. https://doi.org/10.1016/j.is.2014.07.006 .

Hu H, Wen Y, Chua T-S, Li X. Toward scalable systems for big data analytics: a technology tutorial. IEEE Access. 2014;2:652–87. https://doi.org/10.1109/ACCESS.2014.2332453 .

Wielki J. The opportunities and challenges connected with implementation of the Big Data concept. In: Mach-Król M, Olszak CM, Pełech-Pilichowski T, editors. Advances in ICT for Business, Industry and Public Sector. Studies in Computational Intelligence. Springer International Publishing; 2015. p. 171–89.


Ali-ud-din Khan, M., Uddin, M.F., Gupta, N., 2014. Seven V’s of Big Data understanding Big Data to extract value, in: American Society for Engineering Education (ASEE Zone 1), 2014 Zone 1 Conference of The. Presented at the American Society for Engineering Education (ASEE Zone 1), 2014 Zone 1 Conference of the, pp. 1–5. https://doi.org/10.1109/ASEEZone1.2014.6820689

Kepner, J., Gadepally, V., Michaleas, P., Schear, N., Varia, M., Yerukhimovich, A., Cunningham, R.K., 2014. Computing on masked data: a high performance method for improving big data veracity, in: 2014 IEEE High Performance Extreme Computing Conference (HPEC). Presented at the 2014 IEEE High Performance Extreme Computing Conference (HPEC), pp. 1–6. https://doi.org/10.1109/HPEC.2014.7040946

Saha, B., Srivastava, D., 2014. Data quality: The other face of Big Data, in: 2014 IEEE 30th International Conference on Data Engineering (ICDE). Presented at the 2014 IEEE 30th International Conference on Data Engineering (ICDE), pp. 1294–1297. https://doi.org/10.1109/ICDE.2014.6816764

Gandomi A, Haider M. Beyond the hype: Big data concepts, methods, and analytics. Int J Inf Manage. 2015;35:137–44.

Pääkkönen P, Pakkala D. Reference architecture and classification of technologies, products and services for big data systems. Big Data Research. 2015;2:166–86. https://doi.org/10.1016/j.bdr.2015.01.001 .

Oliveira, P., Rodrigues, F., Henriques, P.R., 2005. A Formal Definition of Data Quality Problems., in: IQ.

Maier, M., Serebrenik, A., Vanderfeesten, I.T.P., 2013. Towards a Big Data Reference Architecture. University of Eindhoven.

Caballero, I., Piattini, M., 2003. CALDEA: a data quality model based on maturity levels, in: Third International Conference on Quality Software, 2003. Proceedings. Presented at the Third International Conference on Quality Software, 2003. Proceedings, pp. 380–387. https://doi.org/10.1109/QSIC.2003.1319125

Sidi, F., Shariat Panahy, P.H., Affendey, L.S., Jabar, M.A., Ibrahim, H., Mustapha, A., 2012. Data quality: A survey of data quality dimensions, in: 2012 International Conference on Information Retrieval Knowledge Management (CAMP). Presented at the 2012 International Conference on Information Retrieval Knowledge Management (CAMP), pp. 300–304. https://doi.org/10.1109/InfRKM.2012.6204995

Chen, M., Song, M., Han, J., Haihong, E., 2012. Survey on data quality, in: 2012 World Congress on Information and Communication Technologies (WICT). Presented at the 2012 World Congress on Information and Communication Technologies (WICT), pp. 1009–1013. https://doi.org/10.1109/WICT.2012.6409222

Batini C, Cappiello C, Francalanci C, Maurino A. Methodologies for data quality assessment and improvement. ACM Comput Surv. 2009;41:1–52. https://doi.org/10.1145/1541880.1541883 .

Glowalla, P., Balazy, P., Basten, D., Sunyaev, A., 2014. Process-Driven Data Quality Management–An Application of the Combined Conceptual Life Cycle Model, in: 2014 47th Hawaii International Conference on System Sciences (HICSS). Presented at the 2014 47th Hawaii International Conference on System Sciences (HICSS), pp. 4700–4709. https://doi.org/10.1109/HICSS.2014.575

Wand Y, Wang RY. Anchoring data quality dimensions in ontological foundations. Commun ACM. 1996;39:86–95. https://doi.org/10.1145/240455.240479 .

Wang, R.Y., Strong, D.M., 1996. Beyond accuracy: What data quality means to data consumers. Journal of management information systems 5–33.

Cappiello, C., Caro, A., Rodriguez, A., Caballero, I., 2013. An Approach To Design Business Processes Addressing Data Quality Issues.

Hazen BT, Boone CA, Ezell JD, Jones-Farmer LA. Data quality for data science, predictive analytics, and big data in supply chain management: An introduction to the problem and suggestions for research and applications. Int J Prod Econ. 2014;154:72–80. https://doi.org/10.1016/j.ijpe.2014.04.018 .

Caballero, I., Verbo, E., Calero, C., Piattini, M., 2007. A Data Quality Measurement Information Model Based On ISO/IEC 15939., in: ICIQ. pp. 393–408.

Juddoo, S., 2015. Overview of data quality challenges in the context of Big Data, in: 2015 International Conference on Computing, Communication and Security (ICCCS). Presented at the 2015 International Conference on Computing, Communication and Security (ICCCS), pp. 1–9. https://doi.org/10.1109/CCCS.2015.7374131

Woodall P, Borek A, Parlikad AK. Data quality assessment: The hybrid approach. Inf Manage. 2013;50:369–82. https://doi.org/10.1016/j.im.2013.05.009 .

Goasdoué, V., Nugier, S., Duquennoy, D., Laboisse, B., 2007. An Evaluation Framework For Data Quality Tools., in: ICIQ. pp. 280–294.

Caballero, I., Serrano, M., Piattini, M., 2014. A Data Quality in Use Model for Big Data, in: Indulska, M., Purao, S. (Eds.), Advances in Conceptual Modeling, Lecture Notes in Computer Science. Springer International Publishing, pp. 65–74. https://doi.org/10.1007/978-3-319-12256-4_7

Cai L, Zhu Y. The challenges of data quality and data quality assessment in the big data era. Data Sci J. 2015. https://doi.org/10.5334/dsj-2015-002 .

Philip Woodall, A.B., 2014. An Investigation of How Data Quality is Affected by Dataset Size in the Context of Big Data Analytics.

Laranjeiro, N., Soydemir, S.N., Bernardino, J., 2015. A Survey on Data Quality: Classifying Poor Data, in: 2015 IEEE 21st Pacific Rim International Symposium on Dependable Computing (PRDC). Presented at the 2015 IEEE 21st Pacific Rim International Symposium on Dependable Computing (PRDC), pp. 179–188. https://doi.org/10.1109/PRDC.2015.41

Liu, J., Li, J., Li, W., Wu, J., 2016. Rethinking big data: A review on the data quality and usage issues. ISPRS Journal of Photogrammetry and Remote Sensing, Theme issue “State-of-the-art in photogrammetry, remote sensing and spatial information science” 115, 134–142. https://doi.org/10.1016/j.isprsjprs.2015.11.006

Rao, D., Gudivada, V.N., Raghavan, V.V., 2015. Data quality issues in big data, in: 2015 IEEE International Conference on Big Data (Big Data). Presented at the 2015 IEEE International Conference on Big Data (Big Data), pp. 2654–2660. https://doi.org/10.1109/BigData.2015.7364065

Zhou, H., Lou, J.G., Zhang, H., Lin, H., Lin, H., Qin, T., 2015. An Empirical Study on Quality Issues of Production Big Data Platform, in: 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering (ICSE). Presented at the 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering (ICSE), pp. 17–26. https://doi.org/10.1109/ICSE.2015.130

Becker, D., King, T.D., McMullen, B., 2015. Big data, big data quality problem, in: 2015 IEEE International Conference on Big Data (Big Data). Presented at the 2015 IEEE International Conference on Big Data (Big Data), IEEE, Santa Clara, CA, USA, pp. 2644–2653. https://doi.org/10.1109/BigData.2015.7364064

Maślankowski, J., 2014. Data Quality Issues Concerning Statistical Data Gathering Supported by Big Data Technology, in: Kozielski, S., Mrozek, D., Kasprowski, P., Małysiak-Mrozek, B., Kostrzewa, D. (Eds.), Beyond Databases, Architectures, and Structures, Communications in Computer and Information Science. Springer International Publishing, pp. 92–101. https://doi.org/10.1007/978-3-319-06932-6_10

Fürber, C., Hepp, M., 2011. Towards a Vocabulary for Data Quality Management in Semantic Web Architectures, in: Proceedings of the 1st International Workshop on Linked Web Data Management, LWDM ’11. ACM, New York, NY, USA, pp. 1–8. https://doi.org/10.1145/1966901.1966903

Corrales DC, Corrales JC, Ledezma A. How to address the data quality issues in regression models: a guided process for data cleaning. Symmetry. 2018;10:99.

Fan, W., 2008. Dependencies revisited for improving data quality, in: Proceedings of the Twenty-Seventh ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems. ACM, pp. 159–170.

Kläs, M., Putz, W., Lutz, T., 2016. Quality Evaluation for Big Data: A Scalable Assessment Approach and First Evaluation Results, in: 2016 Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement (IWSM-MENSURA). Presented at the 2016 Joint Conference of the International Workshop on Software Measurement and the International Conference on Software Process and Product Measurement (IWSM-MENSURA), pp. 115–124. https://doi.org/10.1109/IWSM-Mensura.2016.026

Rahm E, Do HH. Data cleaning: Problems and current approaches. IEEE Data Eng Bull. 2000;23:3–13.

Dallachiesa, M., Ebaid, A., Eldawy, A., Elmagarmid, A., Ilyas, I.F., Ouzzani, M., Tang, N., 2013. NADEEF: A Commodity Data Cleaning System, in: Proceedings of the 2013 ACM SIGMOD International Conference on Management of Data, SIGMOD ’13. ACM, New York, NY, USA, pp. 541–552. https://doi.org/10.1145/2463676.2465327

Ebaid A, Elmagarmid A, Ilyas IF, Ouzzani M, Quiane-Ruiz J-A, Tang N, Yin S. NADEEF: A generalized data cleaning system. Proceed VLDB Endowment. 2013;6:1218–21.

Elmagarmid, A., Ilyas, I.F., Ouzzani, M., Quiané-Ruiz, J.-A., Tang, N., Yin, S., 2014. NADEEF/ER: generic and interactive entity resolution. ACM Press, pp. 1071–1074. https://doi.org/10.1145/2588555.2594511

Tang N. Big Data Cleaning. In: Chen L, Jia Y, Sellis T, Liu G, editors. Web Technologies and Applications. Lecture Notes in Computer Science: Springer International Publishing; 2014. p. 13–24.


Ge M, Dohnal V. Quality management in big data. Informatics. 2018;5:19. https://doi.org/10.3390/informatics5020019 .

Jimenez-Marquez JL, Gonzalez-Carrasco I, Lopez-Cuadrado JL, Ruiz-Mezcua B. Towards a big data framework for analyzing social media content. Int J Inf Manage. 2019;44:1–12. https://doi.org/10.1016/j.ijinfomgt.2018.09.003 .

Siddiqa A, Hashem IAT, Yaqoob I, Marjani M, Shamshirband S, Gani A, Nasaruddin F. A survey of big data management: Taxonomy and state-of-the-art. J Netw Comput Appl. 2016;71:151–66. https://doi.org/10.1016/j.jnca.2016.04.008 .

Taleb, I., Dssouli, R., Serhani, M.A., 2015. Big Data Pre-processing: A Quality Framework, in: 2015 IEEE International Congress on Big Data (BigData Congress). Presented at the 2015 IEEE International Congress on Big Data (BigData Congress), pp. 191–198. https://doi.org/10.1109/BigDataCongress.2015.35

Cormode, G., Duffield, N., 2014. Sampling for Big Data: A Tutorial, in: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14. ACM, New York, NY, USA, pp. 1975–1975. https://doi.org/10.1145/2623330.2630811

Gadepally, V., Herr, T., Johnson, L., Milechin, L., Milosavljevic, M., Miller, B.A., 2015. Sampling operations on big data, in: 2015 49th Asilomar Conference on Signals, Systems and Computers. Presented at the 2015 49th Asilomar Conference on Signals, Systems and Computers, pp. 1515–1519. https://doi.org/10.1109/ACSSC.2015.7421398

Liang F, Kim J, Song Q. A bootstrap metropolis-hastings algorithm for bayesian analysis of big data. Technometrics. 2016. https://doi.org/10.1080/00401706.2016.1142905 .


Satyanarayana, A., 2014. Intelligent sampling for big data using bootstrap sampling and chebyshev inequality, in: 2014 IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE). Presented at the 2014 IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE), IEEE, Toronto, ON, Canada, pp. 1–6. https://doi.org/10.1109/CCECE.2014.6901029

Kleiner, A., Talwalkar, A., Sarkar, P., Jordan, M., 2012. The big data bootstrap. arXiv preprint

Dai, W., Wardlaw, I., Cui, Y., Mehdi, K., Li, Y., Long, J., 2016. Data Profiling Technology of Data Governance Regarding Big Data: Review and Rethinking, in: Latifi, S. (Ed.), Information Technolog: New Generations. Springer International Publishing, Cham, pp. 439–450. https://doi.org/10.1007/978-3-319-32467-8_39

Loshin, D., 2010. Rapid Data Quality Assessment Using Data Profiling.

Naumann F. Data profiling revisited. ACM SIGMOD Record. 2014;42:40–9.

Buneman, P., Davidson, S.B., 2010. Data provenance–the foundation of data quality.

Glavic, B., 2014. Big Data Provenance: Challenges and Implications for Benchmarking, in: Specifying Big Data Benchmarks. Springer, pp. 72–80.

Wang, J., Crawl, D., Purawat, S., Nguyen, M., Altintas, I., 2015. Big data provenance: Challenges, state of the art and opportunities, in: 2015 IEEE International Conference on Big Data (Big Data). Presented at the 2015 IEEE International Conference on Big Data (Big Data), pp. 2509–2516. https://doi.org/10.1109/BigData.2015.7364047

Hwang W-J, Wen K-W. Fast kNN classification algorithm based on partial distance search. Electron Lett. 1998;34:2062–3.

Taleb, I., Kassabi, H.T.E., Serhani, M.A., Dssouli, R., Bouhaddioui, C., 2016. Big Data Quality: A Quality Dimensions Evaluation, in: 2016 Intl IEEE Conferences on Ubiquitous Intelligence Computing, Advanced and Trusted Computing, Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People, and Smart World Congress (UIC/ATC/ScalCom/CBDCom/IoP/SmartWorld). Presented at the 2016 Intl IEEE Conferences on Ubiquitous Intelligence Computing, Advanced and Trusted Computing, Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People, and Smart World Congress (UIC/ATC/ScalCom/CBDCom/IoP/SmartWorld), pp. 759–765. https://doi.org/10.1109/UIC-ATC-ScalCom-CBDCom-IoP-SmartWorld.2016.0122

Taleb, I., Serhani, M.A., 2017. Big Data Pre-Processing: Closing the Data Quality Enforcement Loop, in: 2017 IEEE International Congress on Big Data (BigData Congress). Presented at the 2017 IEEE International Congress on Big Data (BigData Congress), pp. 498–501. https://doi.org/10.1109/BigDataCongress.2017.73

Deng, Z., Zhu, X., Cheng, D., Zong, M., Zhang, S., 2016. Efficient kNN classification algorithm for big data. Neurocomputing. https://doi.org/10.1016/j.neucom.2015.08.112

Firmani, D., Mecella, M., Scannapieco, M., Batini, C., 2015. On the Meaningfulness of “Big Data Quality” (Invited Paper), in: Data Science and Engineering. Springer Berlin Heidelberg, pp. 1–15. https://doi.org/10.1007/s41019-015-0004-7

Lee YW. Crafting rules: context-reflective data quality problem solving. J Manag Inf Syst. 2003;20:93–119.

Download references

Acknowledgements

Not applicable.

This work was supported by fund #12R005 from ZCHS at UAE University.

Author information

Authors and Affiliations

College of Technological Innovation, Zayed University, P.O. Box 144534, Abu Dhabi, United Arab Emirates

Ikbal Taleb

College of Information Technology, UAE University, P.O. Box 15551, Al Ain, United Arab Emirates

Mohamed Adel Serhani

Department of Statistics, College of Business and Economics, UAE University, P.O. Box 15551, Al Ain, United Arab Emirates

Chafik Bouhaddioui

Concordia Institute for Information Systems Engineering, Concordia University, Montreal, QC, H4B 1R6, Canada

Rachida Dssouli


Contributions

IT conceived the main conceptual ideas related to the Big data quality framework and the proof outline. He designed the framework and its main modules, and he also worked on the implementation and validation of some of the framework's components. MAS supervised the study and was in charge of direction and planning; he also contributed to several sections, including the abstract, introduction, framework, implementation, and conclusion. CB contributed to data preparation, sampling, and profiling; he also reviewed and validated all formulations and statistical modeling included in this work. RD contributed to the review and discussion of the core contributions and their validation. All authors read and approved the final manuscript.

Authors’ information

Dr. Ikbal Taleb is currently an Assistant Professor in the College of Technological Innovation, Zayed University, Abu Dhabi, U.A.E. He received his Ph.D. in Information and Systems Engineering from Concordia University in 2019 and his M.Sc. in Software Engineering from the University of Montreal, Canada, in 2006. His research interests include data and Big data quality, quality profiling, quality assessment, cloud computing, web services, and mobile web services.

Prof. M. Adel Serhani is currently a Professor and Assistant Dean for Research and Graduate Studies in the College of Information Technology, U.A.E University, Al Ain, U.A.E. He is also an adjunct faculty member in CIISE, Concordia University, Canada. He holds a Ph.D. in Computer Engineering from Concordia University (2006) and an M.Sc. in Software Engineering from the University of Montreal, Canada (2002). His research interests include cloud computing for data-intensive e-health applications and services; SLA enforcement in cloud data centers; the Big data value chain; cloud federation and monitoring; non-invasive smart health monitoring; management of communities of Web services; and Web services applications and security. He has extensive experience gained through his involvement in and management of various R&D projects. He has served on several organizing and technical program committees: he was program co-chair of the International Conference on Web Services (ICWS 2020), co-chair of the IEEE conference on Innovations in Information Technology (IIT'13), chair of the IEEE Workshop on Web Services (IWCMC'13), chair of the IEEE Workshop on Web, Mobile, and Cloud Services (IWCMC'12), and co-chair of the International Workshop on Wireless Sensor Networks and their Applications (NDT'12). He has published around 130 refereed publications, including conference papers, journal articles, a book, and book chapters.

Dr. Chafik Bouhaddioui is an Associate Professor of Statistics in the College of Business and Economics at UAE University. He received his Ph.D. from the University of Montreal in Canada and worked as a lecturer at Concordia University for four years. He has rich experience in applied statistics in finance in the private and public sectors: he worked as an assistant researcher at the Ministry of Finance in Canada and as a senior analyst at the National Bank of Canada, where he developed statistical methods used in stock market forecasting. In 2004 he joined a team of researchers in the finance group at CIRANO in Canada to develop statistical tools and modules in finance and risk analysis. He has published several papers in well-known journals on multivariate time series analysis and its applications in economics and finance. His research areas are diversified and include modeling and prediction in multivariate time series, causality and independence tests, biostatistics, and Big Data.

Prof. Rachida Dssouli is a full professor and Director of the Concordia Institute for Information Systems Engineering, Faculty of Engineering and Computer Science, Concordia University. Dr. Dssouli received a Master's degree (1978), a Diplôme d'études approfondies (1979), and a Doctorat de 3ème cycle in networking (1981) from Université Paul Sabatier, Toulouse, France. She earned her Ph.D. in Computer Science (1987) from Université de Montréal, Canada. Her research interests are in communication software engineering, a subdiscipline of software engineering. Her contributions are in testing based on formal methods, requirements engineering, systems engineering, telecommunication service engineering, and quality of service. She has published more than 200 papers in journals and refereed conferences in her area of research and has supervised or co-supervised more than 50 graduate students, among them 20 Ph.D. students. Dr. Dssouli is the founding director of the Concordia Institute for Information Systems Engineering (CIISE), established in June 2002. The institute now hosts more than 550 graduate students, 20 faculty members, four master's programs, and a Ph.D. program.

Corresponding author

Correspondence to Mohamed Adel Serhani.

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article

Taleb, I., Serhani, M.A., Bouhaddioui, C. et al. Big data quality framework: a holistic approach to continuous quality management. J Big Data 8, 76 (2021). https://doi.org/10.1186/s40537-021-00468-0


Received: 06 February 2021

Accepted: 15 May 2021

Published: 29 May 2021

DOI: https://doi.org/10.1186/s40537-021-00468-0


Keywords

  • Big data quality
  • Quality assessment
  • Quality metrics and scores
  • Pre-processing


Figure. Sum of unique quality measures among value-based care contracts in primary care physicians' patient panels (n = 890 physicians). A unique quality measure is, for example, the share of patients with blood pressure under 140/90 mm Hg. If the same metric appeared in multiple contracts, it was counted once. Vertical lines represent the mean number of quality measures per physician: 54.8, 64.1, and 52.4 in 2020, 2021, and 2022, respectively.

eMethods. Sample and Attribution of Patients to Physicians

Data Sharing Statement



Boone C, Zink A, Wright BJ, Robicsek A. Value-Based Contracting in Clinical Care. JAMA Health Forum. 2024;5(8):e242020. doi:10.1001/jamahealthforum.2024.2020


Value-Based Contracting in Clinical Care

  • 1 Booth School of Business, University of Chicago, Chicago, Illinois
  • 2 Providence Research Network, Portland, Oregon

Value-based contracts are popular for quality improvement in primary care despite mixed evidence of their effectiveness. 1-3 Explanations for their underperformance include the complexity of health care, misalignment of measures, and inadequate financial incentives. 1-3 Another potential, unexplored factor is the volume of quality measures, especially if clinicians face multiple contracts featuring different quality measures and reporting requirements. We quantified the number and diversity of quality measures and value-based contracts faced by primary care physicians (PCPs).

We obtained employment contract data on PCPs continuously employed by an integrated health system from 2020 to 2022 along with payer contracts associated with their attributed patients. Patients who interacted with the health system in the previous 2 years were attributed to 1 PCP annually and linked to a payer contract based on their insurance plan at year end (eMethods in Supplement 1 ). The Providence Research Network Institutional Review Board approved the study and waived informed consent because it was not considered human participant research. We followed the STROBE reporting guidelines.

Payer contract data included type (commercial, Medicaid, or Medicare), incentivized process- or outcome-based quality measures, and Health Care Payment Learning & Action Network (HCPLAN) category. 4 HCPLAN categories 2C, 3A, and 3B were considered value-based contracts.

We measured the number of unique value-based contracts and quality measures per physician-year based on their assigned patients. Quality measures were considered distinct if they referenced different conditions, and measures for the same condition were considered distinct if the target value differed (eg, hemoglobin A1c <8% vs <9% [to convert to proportion of total hemoglobin, multiply by 0.01]). We conducted 2-sample t tests to assess changes in exposure to value-based contracts and quality measures across years; P < .05 indicated statistical significance. Results were robust to excluding PCPs with small panels.
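To make the counting rule concrete, the following is a minimal sketch of how unique quality measures per physician-year could be tallied and compared across years. It is not the study's actual code (the analyses were run in R), and the column names and toy rows are hypothetical:

```python
# Minimal sketch (not the study's code) of the counting rule and t test.
# Column names and the toy rows are hypothetical.
import pandas as pd
from scipy import stats

# One row per physician-year-contract-measure pairing.
df = pd.DataFrame({
    "physician_id": [1, 1, 1, 2, 2, 2],
    "year":         [2020, 2020, 2022, 2020, 2022, 2022],
    "condition":    ["bp", "bp", "a1c", "bp", "a1c", "a1c"],
    "threshold":    ["<140/90", "<140/90", "<9%", "<140/90", "<9%", "<8%"],
})

# A measure is distinct if its condition or target value differs; the same
# (condition, threshold) pair appearing in multiple contracts counts once.
unique = df.drop_duplicates(["physician_id", "year", "condition", "threshold"])
counts = (unique.groupby(["physician_id", "year"]).size()
                .rename("n_measures").reset_index())

# 2-sample t test for a change in exposure between 2020 and 2022.
a = counts.loc[counts["year"] == 2020, "n_measures"]
b = counts.loc[counts["year"] == 2022, "n_measures"]
t_stat, p_value = stats.ttest_ind(a, b)
print(counts)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
```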

Quality measure information was missing for 29% of value-based contracts with attributed patients; thus, we reported both mean number of contracts and mean number of contracts with nonmissing quality measures per physician-year. Analyses were run with R 4.4.0 (R Core Team).

The 890 PCPs included (519 females [58.3%], 371 males [41.7%]) had a mean (SD) of 1308.71 (622.73) attributed patients and an increasing number of value-based contracts from 2020 to 2022 (9.39 to 12.26; P < .001) (Table). Contracts contained a mean (SD) of 10.24 (2.66) quality measures. Physicians faced a mean (SD) of 57.08 (24.58) unique quality measures across 7.62 (5.08) value-based contracts (Table). Distinct measures per physician ranged from 0 to 103 (Figure). Medicare contracts had more quality measures on average than commercial or Medicaid contracts (13.42 vs 10.07 or 5.37). The mean (SD) number of quality measures in Medicare contracts increased significantly from 13.14 (4.72) in 2020 to 15.04 (3.99) in 2022 (P < .001) (Table).

Previous research on value-based contracts suggests these models have not lived up to their potential. 1-3 We found saturation of the quality measure environment to be a possible explanation: the average physician was incentivized to meet 57.08 different quality measures annually.

Study limitations include estimates that were likely lower bounds on PCPs' exposure to quality measures: physicians often face additional quality measures in their employment contracts, and our data on contracts' quality measures were incomplete. The percentage of missing data was lower in 2021, which may explain the larger number of unique quality measures that year. Additionally, the data source was an integrated health system with multiple payers; thus, findings may not generalize to other settings.

Value-based contracting is intended to incentivize care improvement, but it is unlikely a clinician or practice can reasonably optimize against 50 or more measures at a time. Increased use of such levers may also carry unintended consequences. Clarity and salience are crucial to changing behavior, 5 and the burden of extraneous information and processes has been increasingly associated with adverse outcomes, such as physician burnout. 6 As payers increasingly shift toward value-based contracts, additional research is needed to understand how their ubiquity affects their benefits and how such contracts can be scaled sustainably for clinical care.

Accepted for Publication: May 21, 2024.

Published: August 23, 2024. doi:10.1001/jamahealthforum.2024.2020

Open Access: This is an open access article distributed under the terms of the CC-BY License . © 2024 Boone C et al. JAMA Health Forum .

Corresponding Author: Claire Boone, PhD, Booth School of Business, University of Chicago, 5807 S Woodlawn Ave, Chicago, IL 60637 ( [email protected] ).

Author Contributions: Dr Boone had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Boone, Wright, Robicsek.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: Boone, Zink, Wright.

Critical review of the manuscript for important intellectual content: All authors.

Statistical analysis: Boone.

Obtained funding: Robicsek.

Administrative, technical, or material support: Wright, Robicsek.

Supervision: Wright, Robicsek.

Conflict of Interest Disclosures: Dr Boone reported receiving a grant from the National Institute on Aging of the National Institutes of Health. No other disclosures were reported.

Data Sharing Statement: See Supplement 2 .


Research on Identification of Critical Quality Features of Machining Processes Based on Complex Networks and Entropy-CRITIC Methods


1. Introduction

2. Establishment of Machining Process Network Based on Complex Networks

2.1. The Definition of Processing Technology Network Node

  • Machining features are the basic units of part processing, used to describe the geometry and topological relationships of the workpiece surface. During machining, the state of machining features constantly changes; this is essentially the evolution of quality features.
  • The quality feature contains information on nominal requirements, tolerances, and actual error parameters. Quality features are categorized into two types: those with and those without a datum reference.
  • The machining element is the direct error source that causes errors in process quality features. It mainly includes machine tools (MT), cutting tools (CT), and fixtures (FT).

2.2. Definition of Edge Relationship between Nodes in Machining Process Network

  • The evolution of a machining feature across different stages is called an evolutionary relationship.
  • In multi-process machining, the locating relationship between workpiece datums is called the localization relationship.
  • The relationship between different processing stages that use different machining elements is called the processing relationship.
  • A machining feature usually contains one or more quality features; this coupling between a machining feature and its plural quality features is defined as an attribute relationship. A toy construction of these nodes and edges is sketched after this list.
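As a concrete illustration of these definitions, the sketch below assembles a toy machining process network with the networkx library; the node IDs follow the naming convention of Section 2.1, but the specific nodes and edges are invented for demonstration and are not taken from the paper:

```python
# Toy construction of the machining process network described above.
# Node IDs follow the MF/QF/MT/CT/FT convention; the particular nodes
# and edges are invented for illustration.
import networkx as nx

# MultiDiGraph allows parallel edges, so two relation types can link
# the same pair of nodes (e.g., evolution and localization).
G = nx.MultiDiGraph()

G.add_nodes_from(["MF010A", "MF025A"], kind="machining_feature")
G.add_nodes_from(["QF010Aa", "QF025Aa"], kind="quality_feature")
G.add_nodes_from(["MT01", "CT01", "FT01"], kind="machining_element")

# The four edge relationships defined in Section 2.2.
G.add_edge("MF010A", "MF025A", relation="evolution")     # feature evolves between stages
G.add_edge("MF010A", "MF025A", relation="localization")  # datum locating between stages
G.add_edge("MT01", "MF025A", relation="processing")      # machining elements introduce error
G.add_edge("CT01", "MF025A", relation="processing")
G.add_edge("FT01", "MF025A", relation="processing")
G.add_edge("MF010A", "QF010Aa", relation="attribute")    # feature owns its quality features
G.add_edge("MF025A", "QF025Aa", relation="attribute")

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```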

2.3. Construction of Processing Technology Network

3. Critical Process Identification Based on the Machining Process Network

  • In order to better represent the directionality of nodes in a machining process network, a directional centrality degree is proposed to quantify the importance of each machining quality node in a directed network system. The directional centrality degree can be used to measure the coupling relationship between quality features, expressed through a node's in-degree and out-degree. The stochastic matrix $Q = (q_{ij})$ based on the directional centrality degree is given by Equation (5):

$$q_{ij} = \frac{k_i^{\mathrm{in}} + k_i^{\mathrm{out}}}{\sum_{p \in B} k_p^{\mathrm{in}} + \sum_{p \in B} k_p^{\mathrm{out}}}\, a_{ij} \qquad (5)$$

where $B$ represents the set of neighboring nodes of node $i$, and $p \in B$.
  • When the number of iterations stabilizes or the maximum number of iterations is reached, the background node's value is assigned to the other nodes in proportion to their LR values, as in Equation (6); a numerical sketch follows this list:

$$LR_i = LR_i(k_c) + \frac{LR_i(k_c)}{\sum_{i=1}^{N} LR_i(k_c)}\, LR_g(k_c) \qquad (6)$$
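The following numerical sketch runs a plain LeaderRank iteration on a toy directed graph and then applies the proportional redistribution of Equation (6). The graph, uniform edge weights, and fixed iteration count are illustrative assumptions, and the directional weighting of Equation (5) is omitted for brevity:

```python
# Numerical sketch of a LeaderRank-style ranking on a toy directed graph,
# ending with the proportional redistribution of Equation (6).
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (0, 2)]  # toy quality-feature network
N = 3                                      # number of regular nodes
g = N                                      # index of the background (ground) node

A = np.zeros((N + 1, N + 1))
for i, j in edges:
    A[i, j] = 1.0
A[g, :N] = 1.0                             # ground node linked bidirectionally
A[:N, g] = 1.0                             # to every regular node

out_deg = A.sum(axis=1)                    # k_i^out (never zero, thanks to g)
LR = np.ones(N + 1)
LR[g] = 0.0                                # standard LeaderRank initialization

for _ in range(100):                       # LR_i <- sum_j A_ji * LR_j / k_j^out
    LR = A.T @ (LR / out_deg)

# Equation (6): hand the ground node's score back in proportion to LR_i.
final = LR[:N] + LR[:N] / LR[:N].sum() * LR[g]
print(final)
```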

4. Determination of Critical Quality Features

4.1. Determination of Critical Quality Features Based on Entropy Weight Method

4.2. Determination of Critical Quality Features Based on the CRITIC Method

4.3. Determination of Critical Quality Features Based on Entropy-CRITIC Method

5. Instance Analysis

6. Conclusions

Author Contributions

Data Availability Statement

Conflicts of Interest



Table: Node naming conventions in the machining process network.

Node Type | Node ID | Instance
Machining feature nodes | MF + process ID + machining feature ID | MF020A represents machining feature A of machining process number 020.
Quality feature nodes | QF + process ID + machining feature ID + quality feature ID | QF020Aa represents quality feature a of machining feature A of machining process number 020.
Machining element nodes | MT + machine model; CT + tool model; FT + fixture model | MT01 represents machine tool 01; CT01 represents tool 01; FT01 represents fixture 01.
Table: Machining process information (excerpt).

Process Number | MF | MT | CT | FT | QF
10 | MF010A | MTNo. 718 | CTE33-171S | FTPF00-64-014 | QF010Aa, QF010Ab, QF010Ac, QF010Ad, QF010Ae, QF010Af, QF010Ag, QF010Ah, QF010Ai
25 | MF025A | MTVMC1600 | CTE31-412S, CTE33-170S | FTPF00-64-015 | QF025Aa
25 | MF025B | MTVMC1600 | CTE31-412S, CTE33-170S | FTPF00-64-015 | QF025Ba, QF025Bb
95 | MF095A | MTHCMC-2082 | CTK31-152S, CTE31-297S, CTE31-421S, CTL31-051S | FTPF00-64-018 | QF095Aa, QF095Ab, QF095Ac, QF095Ad, QF095Ae
105 | MF105A | MTZ35 | CTPF00-42-001S, CTPF00-42-002S | — | QF105Aa, QF105Ab, QF105Ac, QF105Ad
Table: Critical quality features of essential processes.

Essential Process | Serial Number | Quality Feature
030 | Q1 | QF030Aa
030 | Q2 | QF030Ab
030 | Q3 | QF030Ad
030 | Q4 | QF030Ae
030 | Q5 | QF030Af
030 | Q6 | QF030Ag
055 | Q7 | QF055Aa
055 | Q8 | QF055Ab
055 | Q9 | QF055Ac
065 | Q10 | QF065Ba
065 | Q11 | QF065Bb
065 | Q12 | QF065Bc
065 | Q13 | QF065Bd
065 | Q14 | QF065Bh
065 | Q15 | QF065Bi
070 | Q16 | QF070Ba
070 | Q17 | QF070Bb
070 | Q18 | QF070Bc
070 | Q19 | QF070Bd
070 | Q20 | QF070Be
070 | Q21 | QF070Bg
070 | Q22 | QF070Bh
095 | Q23 | QF095Aa
095 | Q24 | QF095Ab
Table: Sample measurement data for quality features Q1–Q24 (excerpt).

Sample | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | … | Q23 | Q24
1 | 22.76 | 10.915 | 70.33 | 22.76 | 10.905 | 92.28 | … | 252.967 | 141.981
2 | 22.78 | 10.91 | 70.32 | 22.78 | 10.91 | 92.14 | … | 252.969 | 141.969
3 | 22.79 | 10.915 | 70.33 | 22.79 | 10.915 | 92.16 | … | 252.971 | 141.971
4 | 22.80 | 10.92 | 70.34 | 22.80 | 10.92 | 92.18 | … | 252.973 | 141.973
5 | 22.825 | 10.925 | 70.35 | 22.825 | 10.925 | 92.20 | … | 252.975 | 141.975
6 | 22.83 | 10.93 | 70.36 | 22.83 | 10.93 | 92.22 | … | 252.977 | 141.977
7 | 22.85 | 10.935 | 70.37 | 22.85 | 10.935 | 92.24 | … | 252.979 | 141.979
8 | 22.86 | 10.94 | 70.38 | 22.86 | 10.94 | 92.26 | … | 252.981 | 141.981
9 | 22.88 | 10.945 | 70.39 | 22.88 | 10.945 | 92.28 | … | 252.983 | 141.971
10 | 22.86 | 10.95 | 70.40 | 22.79 | 10.92 | 92.16 | … | 252.985 | 141.985
… | … | … | … | … | … | … | … | … | …
45 | 22.85 | 10.935 | 70.38 | 22.85 | 10.935 | 92.24 | … | 252.975 | 141.977
46 | 22.85 | 10.935 | 70.38 | 22.85 | 10.94 | 92.24 | … | 252.975 | 141.977
47 | 22.85 | 10.935 | 70.38 | 22.85 | 10.94 | 92.26 | … | 252.975 | 141.977
48 | 22.85 | 10.935 | 70.39 | 22.86 | 10.945 | 92.26 | … | 252.975 | 141.977
49 | 22.85 | 10.94 | 70.39 | 22.86 | 10.945 | 92.26 | … | 252.975 | 141.977
50 | 22.88 | 10.945 | 70.40 | 22.88 | 10.945 | 92.26 | … | 252.975 | 141.977
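To illustrate how the entropy weight and CRITIC weights named in Sections 4.1–4.3 can be derived from a sample matrix like the one above, here is a hedged sketch; the min-max normalization and the final combination rule (a normalized product of the two weight vectors) are assumptions rather than the paper's exact formulas:

```python
# Hedged sketch of entropy-weight and CRITIC weighting over a sample
# matrix (rows = samples, columns = quality features such as Q1..Q24).
# The toy data mimic an excerpt of the table above.
import numpy as np

X = np.array([[22.76, 10.915, 70.33],
              [22.78, 10.910, 70.32],
              [22.79, 10.915, 70.33],
              [22.80, 10.920, 70.34]])

# Min-max normalize each feature column to [0, 1].
Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)

# Entropy weights: features with lower information entropy get more weight.
P = Z / (Z.sum(axis=0) + 1e-12)
E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(X))
w_entropy = (1 - E) / (1 - E).sum()

# CRITIC weights: contrast intensity (std) times conflict (1 - correlation).
std = Z.std(axis=0, ddof=1)
R = np.corrcoef(Z, rowvar=False)
C = std * (1 - R).sum(axis=0)
w_critic = C / C.sum()

# One common entropy-CRITIC combination: a normalized product of the two.
w = w_entropy * w_critic
w /= w.sum()
print(w_entropy, w_critic, w, sep="\n")
```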

Share and Cite

Qu, D.; Liang, W.; Zhang, Y.; Gu, C.; Zhou, G.; Zhan, Y. Research on Identification of Critical Quality Features of Machining Processes Based on Complex Networks and Entropy-CRITIC Methods. Computers 2024 , 13 , 216. https://doi.org/10.3390/computers13090216



Enhancing low-quality video detection with EEG and ERP techniques


In a research paper, scientists from the Beijing Institute of Technology proposed an event-related potential (ERP) extraction method to solve the asynchrony problem in low-quality video target detection, designed time-frequency features based on the continuous wavelet transform, and established an EEG decoding model based on neural characterization. An average decoding accuracy of 84.56% was achieved in a pseudo-online test.

The new research paper, published July 4 in the journal Cyborg and Bionic Systems, introduces a low-quality video object detection technique based on EEG signals and an ERP alignment method based on eye movement signals, demonstrating their effectiveness and feasibility. The technology is expected to be widely used in military, civil, and medical fields.

According to Fei, "Machine vision technology has developed rapidly in recent years. Image processing and recognition are very efficient. However, identifying low-quality targets remains a challenge for machine vision."

Based on these problems, Fei, the author of this study, proposed a solution: (a) designed a new low-quality video target detection experimental paradigm to simulate UAV reconnaissance video in complex environments; (b) designed an eye movement synchronization method based on eye movement signals to determine the target recognition time by analyzing different eye movement types, so as to accurately extract ERP fragments; (c) analyzed neural representations during target recognition in the time domain, frequency domain, and source space domain; and (d) designed time-frequency features based on the continuous wavelet transform and constructed a low-quality video target EEG decoding model. A sketch of point (d) follows.
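Point (d) can be made concrete with a short sketch of CWT-based time-frequency feature extraction from one EEG epoch; this is not the authors' pipeline, and the sampling rate, wavelet choice, and frequency band are assumptions:

```python
# Illustrative sketch (not the authors' pipeline) of CWT-based
# time-frequency features for a single EEG epoch.
import numpy as np
import pywt

fs = 250                                   # Hz, assumed sampling rate
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
epoch = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)

# Map target frequencies (assumed 1-30 Hz) to Morlet wavelet scales:
# scale = f_c * fs / f, where f_c is the wavelet's center frequency.
freqs_target = np.arange(1, 31)
scales = pywt.central_frequency("morl") * fs / freqs_target

coeffs, freqs = pywt.cwt(epoch, scales, "morl", sampling_period=1 / fs)
power = np.abs(coeffs) ** 2                # time-frequency power map

# Simple feature vector: mean power in each frequency bin over the epoch.
features = power.mean(axis=1)
print(features.shape)                      # (30,)
```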

The authors say this work is the first to explore EEG-based low-quality video object detection, breaking the limitation of using only clear and eye-catching video objects or the RSVP paradigm. In addition, to solve the problem of asynchronous detection in video object detection, an ERP alignment method based on eye movement signals is proposed, and a low-quality video object detection method based on EEG is developed, which supports the practical application of this kind of brain-computer interface.

Fei said, "We simply simulated the low quality of the target due to factors such as weather, environment, or the target being partially obscured by clouds, waves, and islands. Although the simulation can reflect the challenge of low-quality video target detection to a certain extent, it is still relatively simple compared with the complex and changeable low-quality situations that may be encountered in the actual scene. To better apply the target detection technology based on EEG to the human-computer interaction system, it is necessary to further study the influence of different video object quality parameters (video size, definition, and screen complexity) on target detection."


In conclusion, the proposed method based on eye movement signals can perform ERP alignment more efficiently and achieve higher target recognition accuracy (84.56%). The technology can be applied in military reconnaissance, disaster relief, monitoring, medical, and other fields to help quickly identify key targets.

Authors of the paper include Jianting Shi, Luzheng Bi, Xinbo Xu, Aberham Genetu Feleke, and Weijie Fei.

This work was supported by Basic Research Plan under Grant JCKY2022602C024.

Beijing Institute of Technology Press Co., Ltd

Shi, J., et al. (2024). Low-quality Video Target Detection Based on EEG Signal using Eye Movement Alignment. Cyborg and Bionic Systems. https://doi.org/10.34133/cbsystems.0121




Software Engineering Institute

SEI Digital Library: Latest Publications

Embracing AI: Unlocking Scalability and Transformation through Generative Text, Imagery, and Synthetic Audio

August 28, 2024 • Webcast • By Tyler Brooks, Shannon Gallagher, Dominic A. Ross

In this webcast, Tyler Brooks, Shannon Gallagher, and Dominic Ross aim to demystify AI and illustrate its transformative power in achieving scalability, adapting to changing landscapes, and driving digital innovation.

Counter AI: What Is It and What Can You Do About It?

August 27, 2024 • White Paper • By Nathan M. VanHoudnos, Carol J. Smith, Matt Churilla, Shing-hon Lau, Lauren McIlvenny, Greg Touhill

This paper describes counter artificial intelligence (AI) and provides recommendations on what can be done about it.

Using Quality Attribute Scenarios for ML Model Test Case Generation

August 27, 2024 • Conference Paper • By Rachel Brower-Sinning, Grace Lewis, Sebastián Echeverría, Ipek Ozkaya

This paper presents an approach based on quality attribute (QA) scenarios to elicit and define system- and model-relevant test cases for ML models.

3 API Security Risks (and How to Protect Against Them)

August 27, 2024 • Podcast • By McKinley Sconiers-Hasan

McKinley Sconiers-Hasan discusses three API risks and how to address them through the lens of zero trust.

Lessons Learned in Coordinated Disclosure for Artificial Intelligence and Machine Learning Systems

August 20, 2024 • White Paper • By Allen D. Householder, Vijay S. Sarvepalli, Jeff Havrilla, Matt Churilla, Lena Pons, Shing-hon Lau, Nathan M. VanHoudnos, Andrew Kompanek, Lauren McIlvenny

In this paper, the authors describe lessons learned from coordinating AI and ML vulnerabilities at the SEI's CERT/CC.

On the Design, Development, and Testing of Modern APIs

July 30, 2024 • White Paper • By Alejandro Gomez, Alex Vesey

This white paper discusses the design, desired qualities, development, testing, support, and security of modern application programming interfaces (APIs).

Evaluating Large Language Models for Cybersecurity Tasks: Challenges and Best Practices

July 26, 2024 • Podcast • By Jeff Gennari, Samuel J. Perl

Jeff Gennari and Sam Perl discuss applications for LLMs in cybersecurity, potential challenges, and recommendations for evaluating LLMs.

Capability-based Planning for Early-Stage Software Development

July 24, 2024 • Podcast • By Anandi Hira, Bill Nichols

This SEI podcast introduces capability-based planning (CBP) and its use and application in software acquisition.

A Model Problem for Assurance Research: An Autonomous Humanitarian Mission Scenario

July 23, 2024 • Technical Note • By Gabriel Moreno, Anton Hristozov, John E. Robert, Mark H. Klein

This report describes a model problem to support research in large-scale assurance.

Safeguarding Against Recent Vulnerabilities Related to Rust

June 28, 2024 • Podcast • By David Svoboda

David Svoboda discusses two vulnerabilities related to Rust, their sources, and how to mitigate them.

IMAGES

  1. (PDF) How to write a Quality Research Paper?
  2. Managing Quality
  3. National Level Workshop on How to Write a Quality Research Paper
  4. How to write about methodology in a research paper
  5. Quality Management: A Comprehensive Exploration Free Essay Example
  6. (PDF) Qualitative Research paper.edited

VIDEO

  1. Finally engagement💅💍
  2. 1 Writing the Introduction of a Research Paper for Publication
  3. Quality versus quantity in research papers?
  4. How to make a Phylogenetic tree with two or more node values (publishing quality tree)
  5. Quality Management: What You Need To Ensure Quality Products & Services
  6. FDANews: Quality Metrics: Essential to Quality

COMMENTS

  1. Research quality: What it is, and how to achieve it

    1. Introduction Researchers are under extreme pressure to publish high-quality research. What defines such research, though? Is it only research published as articles in journals recognized worldwide as journals of distinction and exemplars of excellence?

  2. Full article: Quality 2030: quality management for the future

    The purpose of this paper is to highlight themes that have been identified as vital and important for research projects within QM during the coming decade. The paper is also an attempt to initiate research for the emerging 2030 agenda for quality management, here referred to as 'Quality 2030'.

  3. Defining and assessing research quality in a transdisciplinary context

    Abstract Research increasingly seeks both to generate knowledge and to contribute to real-world solutions, with strong emphasis on context and social engagement. As boundaries between disciplines are crossed, and as research engages more with stakeholders in complex systems, traditional academic definitions and criteria of research quality are no longer sufficient—there is a need for a ...

  4. Full article: Four decades of research on quality: summarising

    The paper applies data- and text modelling methodology to a chronological dataset covering 37 years and consisting of scientific journals specialising in research on quality; it also includes scientific journals with a broader spectrum of operations management (OM) research.

  5. A Review of the Quality Indicators of Rigor in Qualitative Research

    Attributes of rigor and quality and suggested best practices for qualitative research design as they relate to the steps of designing, conducting, and reporting qualitative research in health professions educational scholarship are presented. A research ...

  6. (PDF) Assessment of Research Quality

This paper considers assessment of research quality by focusing on definition and solution of research problems. We develop and discuss, across different classes of problems, a set of general ...

  7. Assessing the quality of research

    Inflexible use of evidence hierarchies confuses practitioners and irritates researchers. So how can we improve the way we assess research?

  8. Evaluating research: A multidisciplinary approach to assessing research

    Abstract There are few widely acknowledged quality standards for research practice, and few definitions of what constitutes good research. The overall aim was therefore to describe what constitutes research, and then to use this description to develop a model of research practice and to define concepts related to its quality.

  9. Citations, Citation Indicators, and Research Quality: An Overview of

    Abstract Citations are increasingly used as performance indicators in research policy and within the research system. Usually, citations are assumed to reflect the impact of the research or its quality. What is the justification for these assumptions and how do citations relate to research quality?

  10. Full article: Quality Control for Scientific Research: Addressing

    Quality metrics could be designed through the application of this statistical process control for the research enterprise. We argue that one quality control metric—the probability that a research hypothesis is true—is required to address at least relevance and may also be part of the solution for improving responsiveness and reproducibility.

  11. Criteria for Good Qualitative Research: A Comprehensive Review

    This review aims to synthesize a published set of evaluative criteria for good qualitative research. The aim is to shed light on existing standards for assessing the rigor of qualitative research encompassing a range of epistemological and ontological standpoints. Using a systematic search strategy, published journal articles that deliberate criteria for rigorous research were identified. Then ...

  12. Quality Improvement Projects and Clinical Research Studies

    Quality Improvement. As leaders in health care, advanced practitioners often conduct QI projects to improve their internal processes or streamline clinical workflow. These QI projects use a multidisciplinary team comprising a team leader as well as nurses, PAs, pharmacists, physicians, social workers, and program administrators to address ...

  13. Quality in Research: Asking the Right Question

    Definitions of breastfeeding: Call for the development and use of consistent definitions in research and peer-reviewed literature. Breastfeeding Medicine, 7 (6), 397-402.

  14. Assessing the Quality of Education Research Through Its Relevance to

    The findings suggest that using RPPs to assess the quality of education research enhances the relevance to policy and practice as well as attention to the quality of reporting, and pivots from the preeminence of methodological quality. RPPs increase local education leaders' access to research and bolster the use of research.

  15. How to … assess the quality of qualitative research

    A further important marker for assessing the quality of a qualitative study is that the theoretical or conceptual framework is aligned with the research design, the research question (s) and the methodology used in the study, as well as with the reporting of the research findings. High-quality qualitative research necessitates critical reflection and a justification of the selected framework ...

  16. How do you determine the quality of a journal article?

    The quality of the source is thus high enough to use it. So, if a journal is not listed in the Journal Quality List then it is worthwhile to google it. You will then find out more about the quality of the journal. 2. Who is the author? The next step is to look at who the author of the article is:

  17. Total Quality Management Practices' Effects on Quality Performance and

    TQM and performance relationship is a popular discussion in the literature, quality performance and TQM relationship is supported with various studies but the findings about innovative performance is inconsistent. However, most scholars stress on the importance of TQM activities on performance outcomes. The main goal of the study is to investigate whether TQM activities affect quality and/or ...

  18. quality assurance Latest Research Papers

Abstract: The purpose of this research is to look at the educational achievements of students through an internal quality assurance system and as a tool to achieve and maintain school progress. The research takes a quantitative approach. The data are obtained through interview techniques, observations, and library studies.

  19. Total Quality Management Research Paper Topics

    Total quality management research paper topics have grown to become an essential area of study, reflecting the critical role that quality assurance and continuous improvement play in modern organizations. This subject encompasses a wide array of topics, methodologies, and applications, all aimed at enhancing operational efficiency, customer ...

  20. Big data quality framework: a holistic approach to continuous quality

    Big Data is an essential research area for governments, institutions, and private agencies to support their analytics decisions. Big Data refers to all about data, how it is collected, processed, and analyzed to generate value-added data-driven insights and decisions. Degradation in Data Quality may result in unpredictable consequences. In this case, confidence and worthiness in the data and ...

  21. The Many Meanings of Quality: Towards a Definition in Support of

    A short revisit of key research on the concept of quality is presented in a classical overview after this introduction. A conceptual framework of multiple perspectives on quality is then presented from the overview, before concluding the paper with a discussion and conclusions.

  22. Quality of nursing care: Predictors of patient satisfaction in a

    1 INTRODUCTION. Quality of care is how healthcare services achieve the desired health outcomes and meet the empirical evidence of practice. Quality of care can be evaluated by assessing the presence of effective, safe, patient-centred, equitable and efficient services (World Health Organization, 2020).Poor-quality healthcare has a wide range of negative consequences for the patient and the ...

  23. FEATURES OF QUALITY EDUCATION

    FEATURES OF QUALITY EDUCATION By Prof S.G.N Eze Faculty of Education, ESUT, Enugu Introduction. These two words, "quality" and "education" are commonly and carelessly used in every day

  24. Developments in Quality of Work-Life Research and Directions for Future

    Objective of this paper was to observe trends and developments in quality of work-life research throughout the decades. Previous researchers mostly focused on s...

  25. Value-Based Contracting in Clinical Care

    Value-based contracts are popular for quality improvement in primary care despite mixed evidence of their effectiveness. 1-3 Explanations for their underperformance include complexity of health care, misalignment of measures, and inadequate financial incentives. 1-3 Another potential unexplored factor is volume of quality measures, especially if clinicians face multiple contracts featuring ...

  26. Research on Identification of Critical Quality Features of Machining

    Aiming at the difficulty in effectively identifying critical quality features in the complex machining process, this paper proposes a critical quality feature recognition method based on a machining process network. Firstly, the machining process network model is constructed based on the complex network theory. The LeaderRank algorithm is used to identify the critical processes in the ...

  27. August 2024

August, 2024 Eye on Research. Commentary: The AACRAO and SOVA partnership on the LEARN Commission is opening new avenues for AACRAO to contribute to the higher-education dialogue on credit mobility. Our role as SOVA's partner involves crafting four green papers to inform the Commission's work, each tackling a crucial aspect of this complex issue.

  28. Enhancing low-quality video detection with EEG and ERP techniques

    The new research paper, published July 4 in the journal Cyborg and Bionic Systems, introduces a low-quality video object detection technique based on EEG signals and an ERP alignment method based ...

  29. Defining Quality in Higher Education and Identifying Opportunities for

    Completeness addresses the quality of learning materials and services and accessibility and convenience address the ease of access to these learning materials and services. This paper expands upon the definition of quality in higher education, focusing on student dissatisfaction. The classification of student feedback provides a unique perspective.

  30. SEI Digital Library

    The SEI Digital Library provides access to more than 6,000 documents from three decades of research into best practices in software engineering. These documents include technical reports, presentations, webcasts, podcasts and other materials searchable by user-supplied keywords and organized by topic, publication type, publication year, and author.