Prepared for: The National Register of Health Service Psychologists
Data Source: National Register of Health Service Psychologists AI Survey October 2025 in collaboration with Dr. Adam Lockwood at Kent State University (IRB approved).
Abstract
This report summarizes survey data (N = 528) regarding the use of Artificial Intelligence (AI) by health service psychologists. Results indicate a dichotomy in adoption: while a subset of practitioners utilizes AI to save administrative time (M = 3.89 hours/week), a significant portion remains hesitant due to ethical and privacy concerns. Notably, 13.3% of active users reported entering Protected Health Information (PHI) into AI tools, highlighting a critical need for compliance education.
Method and Participant Characteristics
The sample comprised 528 individuals, the majority of whom were licensed psychologists and members of the National Register. The respondents were predominantly female (71.1%) and White (80.2%), with a mean age of 55.26 years (SD = 14.81). The sample had a mean of 25.28 years of practice (SD = 14.21).
Professional demographics indicate that the majority hold doctoral degrees (98.4%) in Clinical Psychology (76.7%). The primary work setting was Private Practice (67.8%), followed by Outpatient Clinics (18.4%) and Hospitals (13.8%). Geographically, respondents represented a diverse range of jurisdictions, with the highest concentrations in California (8.7%), Texas (7.2%), and Illinois (6.5%).
Results
AI Adoption and Usage Patterns
Data regarding the frequency of AI use reveal a discrepancy between personal and professional adoption. While 71.2% of respondents had used AI for personal reasons in the last six months, only 56.9% reported AI use related to their work in the same timeframe. Among those using AI for work, 18.8% reported weekly use and 10.4% reported daily use.
For the subgroup of active AI users (N = 294), practitioners reported saving an average of 3.89 hours per week (SD = 4.81; Mdn = 2.00).
Table 1
Primary Use Cases for AI in Psychological Practice (N = 294)
| AI Use Case | Frequency | Percent of Users |
| --- | --- | --- |
| Answer work-related questions | 116 | 39.5% |
| Generate accessible explanations | 103 | 35.0% |
| Generate content for presentations | 98 | 33.3% |
| Write session or progress notes | 93 | 31.6% |
| Generate psychoeducational materials | 74 | 25.2% |
| Generate emails | 72 | 24.5% |
| Generate recommendations | 62 | 21.1% |
Attitudes and Ethical Perceptions
The data suggest significant skepticism regarding the ethical alignment of AI with professional standards (e.g., the APA Ethics Code). Respondents were asked whether specific AI applications were “ethical.” Planning tasks were perceived as most acceptable: 49.0% agreed it was ethical to use AI for treatment planning, and 45.6% to write goals. Only 30.2% felt it was ethical to use AI to write reports, and 29.2% to interpret data. Trust levels remain low: when asked if they “trust AI,” 22.2% strongly disagreed and 20.0% disagreed, while only 1.0% strongly agreed. Additional data are provided in Table 2.
Table 2
Ethical Perceptions
| Response | Write Goals | Plan Treatment | Write Session Notes | Write Reports | Interpret Data | I trust AI |
| --- | --- | --- | --- | --- | --- | --- |
| Strongly Disagree | 9.3% | 9.1% | 17.1% | 18.7% | 21.6% | 22.2% |
| Disagree | 13.0% | 12.3% | 15.0% | 16.3% | 16.7% | 20.0% |
| Somewhat Disagree | 11.5% | 11.3% | 12.1% | 17.3% | 15.0% | 20.6% |
| Neither Agree nor Disagree | 20.6% | 18.3% | 17.7% | 17.5% | 17.5% | 17.1% |
| Somewhat Agree | 21.9% | 26.5% | 17.7% | 16.0% | 17.7% | 15.5% |
| Agree | 17.9% | 17.1% | 13.6% | 10.3% | 7.6% | 3.5% |
| Strongly Agree | 5.8% | 5.4% | 6.8% | 3.9% | 3.9% | 1.0% |
Risk Management: PHI and Human Oversight
A critical finding concerns the handling of Protected Health Information (PHI).
- Human in the Loop: Oversight of AI outputs is high; 96.6% of users indicated they review or edit AI-generated content before official use.
- PHI Exposure: Of the active users, 13.3% (n = 39) admitted to entering PHI (e.g., names, diagnoses, notes) into AI tools, 2.4% were “unsure,” and 84.4% reported that they do not enter PHI into AI models.
- Compliance Uncertainty: Among those who entered PHI, 69.2% said the AI tool they used was HIPAA compliant and 53.8% reported having a Business Associate Agreement (BAA) in place with the vendor; 12.8% were unsure whether they had a BAA or whether the tool was HIPAA compliant, and 10.3% admitted the tool they used was not HIPAA compliant and had no BAA.
- Informed Consent: Only 40.9% of AI users obtain informed consent before using AI with patient data, 26.8% do not, and 32.3% said their use was conditional.
Barriers to Adoption
For respondents who did not use AI in the last six months (N = 223), the primary barriers were not technical, but ethical and systemic.
Table 3
Primary Reasons for Non-Adoption of AI (N = 223)
| Reason | Percent of Respondents |
| --- | --- |
| Ethical concerns regarding AI use | 71.3% |
| Client/patient data privacy concerns | 70.0% |
| Concerns about accuracy/reliability | 69.1% |
| Uncertainty about legal implications | 53.4% |
| Concerns about replacing human judgment | 50.2% |
| Concerns about potential bias | 47.1% |
Training Needs and Gaps
There is significant demand for education: 81.6% of respondents indicated they would like the National Register to provide training in the use of AI. A gap analysis reveals a disparity between resources used and resources wanted:
- Webinars: 49.1% have used them, but 73.1% want them.
- Videos: Only 10.2% have utilized video training, yet 33.3% desire it.
Discussion
The results suggest that the psychology workforce is currently in a transitional phase regarding Artificial Intelligence. While early adopters are leveraging these tools to reclaim nearly half a workday per week (M = 3.89 hours), the majority of the field remains cautious.
The data highlight a specific vulnerability regarding HIPAA compliance. The finding that over 13% of users have entered PHI into AI tools, combined with a lack of universal informed consent, suggests an urgent need for clear guidelines and training focused specifically on data privacy, BAA requirements, and the legalities of AI integration.
Given that “Ethical Concerns” is the primary barrier for non-users, future resources should move beyond mechanical “how-to” training and address the “should I” questions, specifically providing frameworks for ethical decision-making in AI-assisted practice.
For more detailed survey results, see the Psychology & AI Survey Dashboard.
