RESQUE Profile for Daniel Kristanto

The “fingerprint” of how research is conducted, when only the best work is submitted.

Overview of analysis:
Name Daniel Kristanto
Analysis date 2025-05-02
Academic age (years since PhD, minus child care etc.) 3 (PhD: 2021; subtracted years: 1)
# of analysed outputs 10
version 0.3.0
ORCID https://orcid.org/0000-0003-4729-8839
RaterType Applicant
Note

This is a preview which shows some visual summaries of the RESQUE indicators. Not all indicators have been covered yet, and things might change substantially. No bonus points have been assigned yet to the theory indicators, nor to some other indicators of sample characteristics.

Preprocessing Notes
  • For 1 publication(s) with P_Data_Source = ‘Reuse of someone else’s existing data set’ or P_Data = ‘No’, P_Data_Open has been set to ‘notApplicable’ and a justification has been added.
  • 1 publication(s) were removed from the rigor score computations because no indicators were provided.

Some parts of this profile are purely descriptive: for example, they summarize the types of participant samples, or whether the researcher predominantly works with psychophysiological data or rather focuses on questionnaire studies.

Other parts contain an evaluative aspect: for example, research that is reproducible and allows independent auditing because it provides open data and scripts is, ceteris paribus, better than research without these features. Research outputs that meet such objective quality criteria of methodological rigor can gain “bonus points”, which are summed across all provided research outputs and contribute to the Rigor Profile Overview.

We took care not to systematically disadvantage certain fields or research styles. Generally, the rigor score is a relative score, computed as a “percentage of maximal points” (POMP) score across all indicators that are applicable. For any indicator, one can choose the option “not applicable” if an indicator principally cannot be attained by a research output. The points of such non-applicable indicators are removed from the maximum points and therefore do not lower the computed relative rigor score. However, in order to prevent gaming of this scheme, any “not applicable” claim needs to be justified. Only when the justification is accepted by the committee are the points removed from the maximum. With no or insufficient justification, in contrast, the indicator is set to “not available” (= 0 points) and the maximum points are not adjusted.
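To illustrate the computation, here is a minimal sketch (in Python, not the actual RESQUER implementation) of a POMP-based relative rigor score with “not applicable” handling; the indicator values and field names are invented for the example.

```python
# Minimal sketch of a "percentage of maximal points" (POMP) score with
# "not applicable" handling. This is an illustration, not the RESQUER code.

def relative_rigor_score(indicators):
    # indicators: list of dicts with keys "achieved" (points gained),
    # "max" (maximal points) and "status" ("available", "notAvailable",
    # or "notApplicable")
    achieved, maximum = 0, 0
    for ind in indicators:
        if ind["status"] == "notApplicable":
            # an accepted "not applicable" claim is dropped from the maximum,
            # so it neither raises nor lowers the relative score
            continue
        achieved += ind["achieved"]   # "notAvailable" contributes 0 points
        maximum += ind["max"]
    return achieved / maximum if maximum > 0 else float("nan")

# Invented example: one fulfilled, one missing, one not-applicable indicator
example = [
    {"achieved": 2, "max": 3, "status": "available"},
    {"achieved": 0, "max": 2, "status": "notAvailable"},
    {"achieved": 0, "max": 2, "status": "notApplicable"},
]
print(relative_rigor_score(example))  # 2 / 5 = 0.4, i.e. a score of 40%
```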

Submitted research outputs

The 10 publications had the following methodological type:

Type of method # papers
Empirical Qualitative 5
Empirical Quantitative 2
Nonempirical 3

Team science in publications?

10 out of 10 submitted publications could be automatically retrieved with OpenAlex.

Team category Frequency %
Single authored 0 0%
Small team (<= 5 co-authors) 7 70%
Large team (6-15 co-authors) 3 30%
Big Team (>= 16 co-authors) 0 0%

Types of research data

Data type # of papers
Physiological 6
Behavioral 4
Questionnaire / Self-report 1
Other Type 1

⤷ Types of behavioral data

Data type # of papers
Performance 4

⤷ Types of physiological data

Data type # of papers
fMRI 5
EEG 1

Types of samples

Type of population/sample and cultural background for the 3 papers with own new data collection (“Other” excluded):

Types of Research Designs

Contributorship profile (CRediT roles)

Based on 10 submitted publications, this is the self-reported contributorship profile:

Indicators of Research Transparency and Reproducibility

The relative rigor score (RRS) is computed as a “percentage of maximal points” (POMP) score of multiple indicators. The indicators are grouped into four categories: Open Data, Preregistration, Reproducible Code & Verification, and Open Materials. Indicators that are flagged as “not applicable” are removed from the maximum points and therefore do not lower the RRS.

The following charts are based on 9 scoreable publications.

What is a “good” Relative Rigor Score?

The RESQUE indicators cover current best practices, which often are not yet broadly used. The scores might therefore look meager, even for very good papers. Tentative norm values for the overall rigor score, based on first evaluation studies, are:

  • 10-20% can be considered average
  • 30% is very good
  • >40% is excellent.

Two aspects of openness are shown in the charts:

  • Quantity of openness: How often did they do it? Each small square represents one publication, where the open practice (e.g., open data) has been performed, not performed, or was not applicable.
  • Quality of openness: How well did they do it? The colors of the bar below the squares are based on normative values of the Relative Rigor Score (i.e., “What quality of a practice could reasonably be expected in that field?”).

The general philosophy of RESQUE is: it does not matter so much what kind of research you do, but when you do it, you should do it to a high standard. The radar chart with the Relative Rigor Score helps you see how many quality indicators have been fulfilled in multiple areas of methodological rigor.

  • The width of each sector corresponds to the maximal number of rigor points one could gain. If many indicators are flagged as “not applicable”, the maximal points are reduced and the sector becomes narrower.
  • The colored part of each sector shows the achieved rigor points. An entirely grey sector indicates that no rigor points could be awarded at all (a small sketch of this geometry follows this list).
  • The quality indicators measure both the presence of a practice (e.g., is open data available?) and the quality of the practice (e.g., does it have a codebook? Does it have a persistent identifier?). Hence, even if the pie charts in the table above show the presence of a practice, a lack of quality indicators can lead to a low rigor score.
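As a rough illustration of the sector geometry described above, the following sketch derives sector widths and fill fractions from category points; the category names follow the four indicator groups, but all numbers are invented and are not the applicant’s actual values.

```python
# Illustrative sketch of the radar-chart geometry: sector width is
# proportional to the applicable maximum points of a category; the colored
# share of the sector is the achieved fraction. All numbers are invented.

categories = {
    # category: (achieved points, applicable maximum points)
    "Open Data": (4, 10),
    "Preregistration": (0, 6),
    "Reproducible Code & Verification": (3, 8),
    "Open Materials": (2, 4),
}

total_max = sum(maximum for _, maximum in categories.values())
for name, (achieved, maximum) in categories.items():
    width_deg = 360 * maximum / total_max          # sector width in degrees
    fill = achieved / maximum if maximum else 0.0  # colored share (0 = grey)
    print(f"{name}: width {width_deg:.0f} deg, filled {fill:.0%}")
```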

Scientific impact

BIP! Scholar (a non-commercial open-source service to facilitate fair researcher assessment) provides five impact classes based on norm values:

  • 🚀 Top 0.01%
  • 🌟 Top 0.1%
  • ✨ Top 1%
  • Top 10%
  • Average (Bottom 90%)

This indicator reflects the impact/attention of an article in the research community at large. It is based on AttRank, a variation of PageRank (known from the Google search algorithm) that accounts for the temporal evolution of the citation network. In this way, it alleviates the bias against younger publications, which have not yet had the chance to accumulate many citations. It models a researcher’s preference to read papers that have received a lot of attention recently. Its performance in predicting the ranking of papers by future impact (i.e., citations) has been evaluated and vetted. For more details, see the BIP! glossary and the references therein.

We categorized papers into levels of involvement, based on the degrees of contributorship:

Involvement level Definition Publications
Very High (>=3 roles as *lead*) OR (>= 5 roles as (*lead* OR *equal*)) 7
High (1-2 roles as *lead*) OR (3-4 roles as *equal*) 3
Medium (1-2 roles as *equal*) OR (>= 5 roles as *support*) 0
Low All other combinations 0
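Read as a decision rule, the table above could be implemented roughly as follows; this is a sketch, not the RESQUER source, and it assumes that the input is one degree of contributorship (“lead”, “equal”, or “support”) per CRediT role of a paper.

```python
# Sketch of the involvement rule from the table above.
# roles: degrees of contributorship for one paper, e.g. ["lead", "equal", ...]

def involvement_level(roles):
    lead = sum(r == "lead" for r in roles)
    equal = sum(r == "equal" for r in roles)
    support = sum(r == "support" for r in roles)
    if lead >= 3 or (lead + equal) >= 5:
        return "Very High"
    if 1 <= lead <= 2 or 3 <= equal <= 4:
        return "High"
    if 1 <= equal <= 2 or support >= 5:
        return "Medium"
    return "Low"

print(involvement_level(["lead", "lead", "equal", "support"]))  # "High"
```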

Of the 10 submitted papers by Daniel Kristanto, 3 were in the top 10% popularity class of all papers or better.

Internationality and Interdisciplinarity

The analysis is based only on the submitted publications (not the entire publication list) of the applicant. Publication and co-author data are retrieved from the OpenAlex database. Note that preprints are not indexed by OpenAlex and therefore do not contribute to this analysis.

  • Internationality: All co-authors are retrieved from OpenAlex with their current affiliation. The index is measured by Pielou’s Evenness Index (Pielou 1966) of the country codes of all co-authors. It considers the 10 most frequent country codes.
  • Interdisciplinarity is measured by the Evenness Index of the fields (as classified by OpenAlex) of the publications. It considers the 6 most frequent fields.

The evenness indices are normalized to a scale from 0 (no diversity: everything is in one category) to 1 (maximum diversity: all categories are equally represented). They are computed as a normalized Shannon entropy.
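A minimal sketch of this computation is shown below. It assumes normalization by the number of observed categories; whether the actual RESQUE computation restricts the categories (e.g., to the 10 most frequent country codes) before normalizing is not specified here.

```python
import math

# Sketch of a normalized Shannon entropy (Pielou's evenness).
# Assumption: normalization by the number of observed categories.

def evenness(counts):
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    if len(props) <= 1:
        return 0.0                         # a single category: no diversity
    shannon = -sum(p * math.log(p) for p in props)
    return shannon / math.log(len(props))  # 0 = one category, 1 = all equal

# Co-author country counts reported below: DE 10, HK 3, CN 1, TH 1
print(round(evenness([10, 3, 1, 1]), 2))
```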

Internationality (scale from “only within-country co-authors” to “broad co-author network from many countries”)
Interdisciplinarity (scale from “single discipline” to “many disciplines”)
15 unique identifiable co-authors:
  • 33% from 3 international countries
  • 67% from the same country
3 primary fields:
  • Neuroscience (7)
  • Engineering (2)
  • Computer Science (1)
Co-authors' Country Code # of Co-authors
DE 10
HK 3
CN 1
TH 1
The main subfields are (multiple categories per paper are possible):
Subfield # of papers
Cognitive Neuroscience 13
Artificial Intelligence 3
Molecular Biology 2
Aerospace Engineering 1
Building and Construction 1
Control and Systems Engineering 1

“Not applicable” justifications

Choosing “not applicable” indicates that an indicator principally cannot be attained by a research output. To avoid bias against certain research fields, the points of such non-applicable indicators are removed from the maximum points and therefore do not lower the computed relative rigor score. However, in order to prevent gaming of this scheme, any “not applicable” claim needs to be justified. Only when the justification is accepted by the committee are the points removed from the maximum. With no or insufficient justification, in contrast, the indicator should be set to “not available” (= 0 points) and the maximum points are not adjusted. (Note: The latter correction currently needs to be done manually in the json file.)

These are all claims of non-applicability from this applicant:

P_Data_Open

Title Year P_Data_Open P_Data_Open_NAExplanation
Kristanto et al. (2023): An Extended Active Learning Approach to Multiverse Analysis: Predictions of Latent Variables from Graph Theory Measures of the Human Connectome and Their Direct Replication 2023 NotApplicable (automatically added: For reused data sets of other researchers and for ‘no data’, we automatically set P_Data_Open to ‘notApplicable’).

P_OpenMaterials

Title Year P_OpenMaterials P_OpenMaterials_NAExplanation
Kristanto et al. (2023): Cognitive abilities are associated with specific conjunctions of structural and functional neural subnetworks 2023 NotApplicable Materials available at www.humanconnectome.org/ (request required)
Kristanto et al. (2022): What do neuroanatomical networks reveal about the ontology of human cognitive abilities? 2022 NotApplicable Materials available at www.humanconnectome.org/ (request required)
Kristanto et al. (2020): Predicting reading ability from brain anatomy and function: From areas to connections 2020 NotApplicable Materials available at www.humanconnectome.org/ (request required)

P_Theorizing

Title Year P_Theorizing P_Theorizing_NAExplanation
Jacobsen et al. (2024): Preprocessing choices for P3 analyses with mobile EEG: A systematic literature review and interactive exploration 2024 NotApplicable It is a systematic review paper to investigate the variability of preprocessing pipelines in mEEG. There is no specific theory tested.
Kristanto et al. (2024): The multiverse of data preprocessing and analysis in graph-based fMRI: A systematic literature review of analytical choices fed into a decision support tool for informed analysis 2024 NotApplicable It is a systematic review paper to investigate the variability of preprocessing pipelines in fMRI. There is no specific theory tested.
Leung et al. (2025): Re-SearchTerms: A Shiny app for exploring terminology variations in psychology and metascience 2025 NotApplicable This paper investigates the variability of definitions of different terms related to open science and provides an interactive tool to explore the definitions.

RESQUER package version: 0.7.3