Buletin Ilmiah Sarjana Teknik Elektro ISSN: 2685-9572
Maintaining Empathy and Relational Integrity in Digitally Mediated Social Work: Practitioner Strategies for Artificial Intelligence Integration
Yih-Chang Chen 1, Chia-Ching Lin 2
1 Department of Information Management, Chang Jung Christian University, Tainan 711, Taiwan
2 Department of Finance, Chang Jung Christian University, Tainan 711, Taiwan
ARTICLE INFORMATION

Article History:
Received 13 March 2025
Revised 27 April 2025
Accepted 30 April 2025

Keywords: Digitally Mediated Social Work; Empathy Preservation; Artificial Intelligence Integration; Relational Integrity; AI-Enhanced Social Services

Corresponding Author: Yih-Chang Chen, Department of Information Management, Chang Jung Christian University, Tainan 711, Taiwan. Email: cheny@mail.cjcu.edu.tw

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Document Citation: Y.-C. Chen and C.-C. Lin, “Maintaining Empathy and Relational Integrity in Digitally Mediated Social Work: Practitioner Strategies for Artificial Intelligence Integration,” Buletin Ilmiah Sarjana Teknik Elektro, vol. 7, no. 2, pp. 111-121, 2025, DOI: 10.12928/biste.v7i2.13008.

ABSTRACT

This study addresses the critical challenge of preserving relational integrity in social work practice within artificial intelligence (AI)-enhanced environments. While AI technologies promise operational efficiency, their impact on empathy and human connection in social work is not fully understood. This research aims to explore how social workers maintain relational integrity when interacting with clients through AI tools, providing practical strategies and theoretical insights. The research contributes to the field by proposing a relational framework for AI integration in social work practice, emphasizing human-centered principles. The study utilizes a qualitative phenomenological approach, drawing on 24 licensed social workers from diverse sectors (e.g., child welfare, elder care, and mental health) in three urban areas known for AI adoption. Data collection involved semi-structured interviews and artifact analysis, including AI interface screenshots and decision-making protocols, to capture practitioner experiences. Findings reveal three key themes: reframing empathy in digital interactions, AI as a dual partner and adversary, and ethical tensions. Results indicate that video calls and visual aids are crucial for preserving empathy, while social workers employ proactive strategies to manage AI’s limitations. The study highlights the need for clear guidelines, interdisciplinary collaboration, and training to ensure AI supports relational practices rather than replacing them. These findings have significant policy and practice implications, offering a foundation for future research and AI tool development in social services.
Social work has long been defined by its relational essence, where empathy, human connection, and interpersonal understanding are central to effective practice [1]-[8]. With the rapid advancement of Artificial Intelligence (AI) technologies — such as predictive analytics, automated case assessments, chatbots, and virtual assistants — the traditional landscape of social work is undergoing a significant transformation [9]-[19]. These innovations promise increased efficiency, improved decision-making, and optimized resource allocation. However, the introduction of AI tools also presents critical challenges to maintaining the relational aspects that are essential to social work practice, particularly empathy, trust-building, and meaningful human engagement.
The growing reliance on digital platforms, dramatically accelerated by the COVID-19 pandemic, amplifies these challenges. Practitioners now depend heavily on technology to bridge physical distances, yet substantial uncertainty remains regarding the impact of digital mediation on relational quality and client outcomes. While AI-driven tools have been shown to enhance operational tasks, they also raise concerns about impersonal interactions, the erosion of client autonomy, and the potential for algorithmic bias [20]-[32]. Although these issues are widely acknowledged in the literature, how frontline social workers preserve relational integrity and empathy in their practice, especially in digitally mediated environments, remains underexplored.
This study addresses this significant research gap by examining the strategies social workers use to maintain relational integrity in AI-enhanced practice settings. Specifically, the research aims to explore how social workers maintain relational integrity when interacting with clients through AI tools, to identify the practical strategies practitioners use to preserve empathy in digitally mediated interactions, and to derive theoretical insights that can inform a relational framework for AI integration.
The research contributes to both theory and practice by providing a nuanced understanding of how AI can be integrated into social work without undermining its relational core. By developing and proposing a relational framework for AI integration, this study aims to guide policymakers, social work practitioners, and AI developers in fostering ethical, human-centered innovations in social services. This framework outlines the key strategies for balancing technological efficiency with empathetic engagement, ensuring that AI tools enhance, rather than replace, the relational dimensions of social work.
Ultimately, the contribution of this research is twofold: it offers a critical theoretical perspective on the intersection of AI and relational practice, and it provides practical recommendations for maintaining the human element in AI-driven social work environments. By examining both the opportunities and challenges presented by AI, this study aims to foster a more balanced, ethical, and relationally grounded approach to the future integration of technology in social work practice.
This study utilized a qualitative phenomenological approach informed by Heideggerian phenomenology [33]-[40], specifically chosen to deeply explore and articulate the lived experiences and subjective interpretations of social work practitioners interacting with AI-enhanced environments. Phenomenology, particularly in the Heideggerian tradition, emphasizes individuals’ existential contexts and subjective experiences, making it uniquely suited for investigating how social workers interpret and assign meaning to their interactions within technologically mediated settings [41]-[49]. Although phenomenology was the sole methodological approach, limiting generalizability, it was intentionally selected to capture the complex, nuanced experiences critical to the study’s objectives.
Participants were selected through a purposive sampling strategy from social service agencies across three major urban areas known for their pioneering adoption of AI technologies. These urban centers were specifically chosen due to their advanced integration and frequent utilization of AI in social work, providing ideal contexts for capturing diverse and rich experiences related to human-AI interactions. A total of 24 licensed social workers from various sectors — child welfare, elder care, and mental health services — were recruited, ensuring comprehensive coverage of experiences and perspectives regarding AI’s role in social work. The rationale for choosing 24 participants was based on principles of data saturation, achieved through iterative analysis until additional interviews yielded no new substantive themes or insights. Despite potential selection bias inherent in focusing on technologically advanced areas, the study prioritized depth and richness of qualitative data over broader representational claims.
Participants in this study comprised 24 licensed social workers purposefully selected to represent diverse professional backgrounds, ensuring comprehensive insights into the integration and use of AI technologies within social work. These individuals were recruited from three distinct service domains: child welfare, elder care, and mental health services. The choice of these sectors ensured a wide-ranging perspective on AI’s integration across various critical areas of social work practice.
The purposive sampling strategy, a non-probabilistic technique suitable for qualitative research, was employed explicitly to identify participants capable of providing in-depth, contextually rich information aligned with the research objectives [50]-[58]. Inclusion criteria encompassed professional licensure, active employment within AI-integrated social service environments, and a minimum of two years of experience using digital mediation tools. These criteria ensured that participants could offer authentic, experience-based insights critical to addressing the research questions.
The decision to include 24 participants was guided by achieving data saturation, verified through iterative analysis until additional interviews ceased to yield new substantive themes or insights. Additionally, the three metropolitan regions from which participants were selected were specifically chosen due to their advanced adoption and sophisticated integration of AI technologies in social work. This focus enabled the study to deeply explore contexts at the forefront of technological innovation, acknowledging, however, that it introduces potential selection bias regarding broader generalizability.
Ethical standards were rigorously maintained throughout participant recruitment, ensuring transparency about participant rights, voluntary involvement, confidentiality, and the right to withdraw at any stage without repercussions. This rigorous ethical approach safeguarded participants’ autonomy and enhanced the validity and integrity of the collected qualitative data.
Data collection involved comprehensive semi-structured interviews specifically designed to elicit rich, detailed narratives about participants’ experiences, perceptions, strategies, and challenges in maintaining relational practices within digitally mediated environments [59]-[63]. Interviews were conducted via Zoom and lasted between 60 and 90 minutes, allowing sufficient time for participants to articulate nuanced insights into their professional interactions. Explicit participant consent was obtained prior to audio-recording each session, and all interviews were subsequently transcribed verbatim, ensuring accurate data representation and reliability in subsequent analyses.
Artifact analysis supplemented interview data, providing additional contextual understanding and concrete evidence of AI interactions within social work practice. Artifacts included screenshots of AI interfaces, workflow diagrams, AI-driven decision-making documentation, and specific examples of communications facilitated by AI tools. These artifacts were systematically integrated and coded alongside interview data to enhance the triangulation process, thereby increasing the robustness, credibility, and validity of the study’s findings.
Throughout data collection, strict adherence to ethical protocols emphasized informed consent, participant confidentiality, data security, and anonymity. Data were securely stored with access restricted solely to authorized research team members, further safeguarding the integrity and confidentiality of the information collected.
The analytical framework employed Braun and Clarke’s six-phase thematic analysis method to systematically identify, code, and interpret emerging themes [64][65]. These phases included: (1) data familiarization through multiple readings and preliminary notes; (2) initial code generation through systematic coding of interview transcripts and artifacts; (3) theme identification by clustering related codes into potential themes; (4) reviewing themes iteratively to assess coherence and consistency; (5) clearly defining and naming each theme based on explicit and distinct criteria; and (6) producing a detailed analytical narrative.
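For readers unfamiliar with this procedure, the sketch below illustrates in Python how phases 2 and 3 (code generation and theme clustering) can be represented as a simple data structure. The actual analysis was conducted in NVivo by the research team, not in code, and all codes, excerpts, and participant identifiers shown here are hypothetical.

```python
# Illustrative sketch only: shows how coded transcript segments (phase 2)
# can be clustered into candidate themes (phase 3) of Braun and Clarke's
# method. All codes, excerpts, and participant IDs are hypothetical.
from collections import defaultdict

# Phase 2: coded segments as (participant ID, excerpt, assigned code)
coded_segments = [
    ("P01", "I always start video calls by checking how the client feels", "empathy_rituals"),
    ("P07", "The risk score felt like it was second-guessing my judgment", "ai_as_adversary"),
    ("P12", "I show clients the chart so they can see what the system sees", "transparency_practices"),
    ("P03", "A colleague reviews every AI suggestion before I act on it", "double_validation"),
]

# Phase 3: cluster related codes into candidate themes
code_to_theme = {
    "empathy_rituals": "Reframing empathy in digital interactions",
    "transparency_practices": "Reframing empathy in digital interactions",
    "ai_as_adversary": "AI as partner and adversary",
    "double_validation": "Ethical tensions and role strain",
}

themes = defaultdict(list)
for participant, excerpt, code in coded_segments:
    themes[code_to_theme[code]].append((participant, code, excerpt))

# Phases 4-6 (review, define, report) would iterate over this structure
for theme, segments in themes.items():
    print(f"{theme}: {len(segments)} supporting segment(s)")
```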
NVivo qualitative software facilitated this rigorous analytical process, enhancing depth and transparency by enabling detailed documentation, systematic coding comparisons, and cross-validation across multiple data sources. The framework specifically accounted for interactional complexities of AI-mediated social work practices, including initial engagement, AI-based assessments, practitioner-client digital interactions, and human-driven adjustments or overrides. Inter-coder reliability was maintained through regular research team meetings, independent coding verifications, and collaborative consensus-building, ensuring robust and consistent analytical outcomes.
The system flow diagram (Figure 1) visually represents the iterative and dynamic nature of AI-mediated social work practice, emphasizing human oversight and judgment throughout the process. It begins with case initiation, where social workers collect initial client information and concerns. AI-driven assessments and preliminary analyses subsequently inform practitioners about potential client needs and intervention priorities. Critically, this automated analysis is reviewed by social workers through direct, empathy-driven interactions with clients via digital platforms. Practitioners retain ultimate decision-making authority, actively adjusting or overriding AI recommendations as necessary. Finally, the process involves planning and executing tailored interventions, followed by continuous follow-up and reassessment activities. This iterative depiction accurately captures the complexities and nuances inherent in real-world social work decision-making processes, highlighting the indispensable role of human relational competencies alongside technological tools.
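To make the sequence in Figure 1 concrete, the following minimal Python sketch models the human-in-the-loop decision flow described above, in which the practitioner retains override authority over AI recommendations. It illustrates the process logic only, not the study's actual system; all class, function, and field names are hypothetical.

```python
# Minimal sketch of the human-in-the-loop flow in Figure 1: case initiation,
# AI-driven preliminary assessment, empathic practitioner review with override
# authority, then intervention planning and follow-up. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Recommendation:
    risk_level: str       # e.g., "low", "moderate", "high"
    suggested_action: str

def ai_preliminary_assessment(case_notes: str) -> Recommendation:
    """Stand-in for an AI-driven assessment (risk scoring, triage)."""
    if "urgent" in case_notes.lower():
        return Recommendation("high", "schedule same-week home visit")
    return Recommendation("moderate", "schedule routine video check-in")

def practitioner_review(rec: Recommendation, client_context: str) -> Recommendation:
    """The social worker retains final authority and may override the AI."""
    if "prefers in-person" in client_context and "video" in rec.suggested_action:
        return Recommendation(rec.risk_level, "arrange in-person meeting instead")
    return rec

# Case initiation -> AI assessment -> empathic review -> intervention -> follow-up
notes = "Client reports urgent housing instability"
context = "Client prefers in-person contact; limited internet access"
decision = practitioner_review(ai_preliminary_assessment(notes), context)
print(decision)  # follow-up and reassessment would repeat this loop
```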
Figure 1. System Flow Diagram
NVivo qualitative analysis software was employed comprehensively throughout this study to manage, code, and systematically analyze the extensive qualitative data derived from both interviews and artifact reviews. The software provided significant analytical advantages beyond mere organizational functions, facilitating deeper interpretative insights and enhancing methodological rigor [66]-[71]. Specifically, NVivo enabled sophisticated cross-referencing of data sources, pattern recognition, thematic consistency checks, and rigorous tracking of analytical decisions. These capabilities ensured accuracy, consistency, and transparency in thematic identification and facilitated inter-coder reliability through clearly documented analytical pathways.
Zoom video conferencing served as the primary platform for conducting and recording interviews. This choice allowed for secure, high-quality, real-time communication with participants located across different geographical regions. Interviews were digitally recorded via Zoom’s built-in recording function, producing reliable audio quality suitable for accurate transcription. Transcription software complemented Zoom recordings, efficiently converting spoken words into verbatim text, thereby increasing the precision of subsequent analyses.
Ethical considerations related to data privacy, confidentiality, and security were paramount throughout the study. All digital tools, including NVivo and Zoom, were utilized in strict compliance with institutional ethical guidelines, and data were securely stored with restricted access to authorized personnel only. These rigorous practices ensured robust, ethically sound, and reliable research outcomes.
This study identified three core themes integral to understanding relational practices within digitally mediated social work environments: (1) reframing empathy in digital interactions, (2) AI as both a supportive partner and a challenging adversary, and (3) ethical tensions and role strain. These themes illuminate critical dimensions of social workers' lived experiences with AI technologies.
Participants consistently emphasized the importance of reframing empathy in digitally mediated interactions. Social workers described several innovative digital strategies designed to enhance emotional connectivity, including structured video conferencing, tailored AI-scripted communication, and visual aids that foster emotional understanding and trust [72]-[74].
Table 1 presents participant evaluations of various digital empathy strategies. Scheduled video calls achieved the highest effectiveness (mean score = 4.5, SD = 0.5, n = 24), suggesting that face-to-face visual interaction significantly enhances emotional understanding and builds stronger practitioner-client relationships compared to text-based methods. Personalized AI scripts received lower ratings (mean = 3.7, SD = 0.8), highlighting participants’ concern about the lack of nuanced emotional responsiveness without human oversight. Visual aids, rated 3.9 (SD = 0.6), effectively improved client engagement through clarity and transparency. However, some practitioners expressed caution, emphasizing the need for professional interpretation to prevent oversimplification of complex client issues.
Table 1. Participant Ratings of Digital Empathy Strategies
Digital Empathy Strategy | Reported Effectiveness, Mean (SD), 1–5 scale | Key Benefits and Limitations
Scheduled video calls | 4.5 (0.5) | Enhanced relational depth; supports nuanced emotional interaction; requires stable technology infrastructure.
Personalized AI scripts | 3.7 (0.8) | Efficient for routine communication; risks losing emotional nuance without human oversight.
Visual aids (client charts) | 3.9 (0.6) | Clarifies complex information; fosters transparency and client trust.
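For readers who wish to see how the descriptive statistics in Table 1 are computed, the short Python sketch below reproduces the reported means and standard deviations. The individual ratings are invented for illustration (constructed to match the published values); they are not the study's raw data, and the study does not state whether population or sample standard deviations were used (the population formula is shown here).

```python
# Sketch of the descriptive statistics reported in Table 1. The individual
# 1-5 ratings below are fabricated so that they reproduce the published
# means and SDs (n = 24 per strategy); they are not the study's raw data.
from statistics import mean, pstdev

ratings = {
    "Scheduled video calls":       [5] * 12 + [4] * 12,
    "Personalized AI scripts":     [5] * 4 + [4] * 10 + [3] * 9 + [2] * 1,
    "Visual aids (client charts)": [5] * 3 + [4] * 16 + [3] * 5,
}

for strategy, scores in ratings.items():
    print(f"{strategy}: mean = {mean(scores):.1f}, "
          f"SD = {pstdev(scores):.1f}, n = {len(scores)}")
```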
Participants perceived AI as both beneficial and problematic. While acknowledging substantial advantages in operational efficiency, accuracy in initial case assessments, and improved resource management, practitioners expressed significant concerns about AI potentially undermining professional judgment and autonomy.
Figure 2 illustrates the frequency of AI tool usage across three social work departments: Child Welfare, Mental Health, and Elder Care. In Child Welfare, the AI-driven Risk Score tool was most frequently employed (n = 22), reflecting a greater reliance on predictive analytics to rapidly assess risks and prioritize interventions. Mental Health practitioners frequently utilized Chatbots (n = 18), supporting ongoing client interaction and immediate engagement. In Elder Care, Case Schedulers were the most frequently adopted tool (n = 20), emphasizing the need for organized, automated management of routine tasks and appointment scheduling. These departmental differences underline distinct practical priorities and demonstrate the varied perceptions and reliance on AI tools across social work practice domains.
Figure 2. Frequency of AI Tool Use by Department
Ethical considerations featured prominently among participant concerns. Practitioners reported significant anxieties related to depersonalization, transparency in AI decision-making, and potential breaches of client confidentiality. While prior literature has identified similar ethical risks [20]-[22], participants in this study uniquely described specific adaptive strategies to mitigate these issues, such as implementing double validation procedures (i.e., seeking colleague consultation on AI recommendations) and regular client check-ins. Social workers highlighted that while AI tools can increase efficiency, careful management is required to preserve the integrity of relational practices. For instance, one participant noted:
“When AI systems suggest an intervention, I always cross-verify with a colleague before finalizing decisions. This double-validation ensures that we’re making ethically sound and empathetically grounded choices.”
This study provides a more nuanced understanding of AI’s role in social work, specifically regarding the human-AI relationship. Comparisons with earlier research [20]-[22] show that while previous studies have largely focused on the ethical risks and operational limitations of AI, this study highlights adaptive, human-centered strategies employed by social workers. These strategies — such as integrating AI tools without sacrificing empathy — present a dynamic model of how AI can function as both a partner and a tool for professional enhancement.
Unlike studies that express skepticism towards AI (e.g., Afrouz & Lucas [2]), participants in this study demonstrated a proactive approach, engaging actively with AI tools while still maintaining critical oversight (as shown in Figure 2). They emphasized human judgment in interpreting AI recommendations and handling algorithmic limitations, a more balanced view compared to the more cautionary tones in prior research.
Additionally, this research extends discussions on algorithmic transparency by showing how social workers leverage visual aids and client-centered communication to ensure AI decisions are understandable and actionable. By providing context and clarity, social workers bridge the gap between AI-mediated recommendations and the relational aspects of practice, enhancing trust and transparency.
Furthermore, the findings align with and extend Steyvers and Kumar’s work [11], which highlighted the importance of AI transparency. Our study adds depth by showing operational strategies used by social workers to incorporate client feedback and collaborative decision-making, thereby enhancing the ecological validity of AI applications.
Overall, this study illustrates how social workers creatively navigate AI integration, maintaining a deliberate balance between technological efficiency and relational depth. These findings contribute to an evolving discourse on the role of AI in relationally-driven professions, distinguishing current practices from the more cautious or critical stances taken in earlier studies.
The methodological strengths of this study include its phenomenological approach, capturing detailed lived experiences from practitioners. The triangulation of interview data and artifact analysis — examining AI interface screenshots, workflow documentation, and AI decision-making protocols — provided additional context and credibility. However, limitations include potential biases stemming from self-reported effectiveness measures and purposive sampling from technologically advanced urban regions, which may limit generalizability to less-equipped settings.
Additionally, while thematic saturation was ensured through iterative analysis and cross-checking among researchers, the qualitative nature of the study inherently poses risks of interpretative researcher bias. Future studies should incorporate quantitative validation to complement qualitative insights and broaden sampling across diverse geographic and socioeconomic contexts to enhance representativeness and applicability.
In conclusion, the findings underscore the complexity of integrating AI into relationally driven professional contexts and illustrate practical approaches social workers utilize to maintain empathy and ethical standards amidst technological mediation. Future policy development and research should prioritize clear guidelines for ethical AI use, interdisciplinary collaboration, and comprehensive training programs that equip social workers with essential skills for navigating human-AI interactions. By emphasizing the preservation of relational integrity in the era of digital transformation, this study offers a valuable foundation for the advancement of AI-enhanced practices in social work.
This research makes significant contributions to the understanding of the intersection between relational practices and the integration of artificial intelligence (AI) in social work. It adds to the theoretical and practical knowledge in the field by addressing the complexities of maintaining relational integrity and empathy in AI-enhanced social work environments.
First, the study identifies practitioner-driven strategies to preserve empathy and relational integrity within AI-mediated settings. These practical strategies, which include tailored AI scripts, structured video calls, and the use of visual aids, fill a critical gap in existing literature, which predominantly focuses on technological capabilities and ethical concerns, but rarely provides insights into day-to-day relational adaptations.
Second, the research introduces a unique relational framework for integrating AI technologies into social work practice. This framework emphasizes the need to balance technological efficiency with the core relational aspects of social work. It ensures that AI tools enhance, rather than detract from, empathetic and client-centered practices by highlighting the importance of human oversight and professional judgment.
Third, the study offers valuable perspectives from practitioners and their adaptive strategies. These findings provide actionable insights for policymakers, technology developers, and educators within the social work field. The recommendations from this research underline the importance of maintaining relational ethics, human judgment, and professional oversight when deploying AI systems.
Collectively, these contributions offer a deeper understanding of the dynamic relationship between human practitioners and AI, contributing to more informed, ethically grounded, and relationally enriched approaches to AI integration in social services.
The findings of this research hold significant implications for both policy and practice in social work, most notably the development of clear ethical guidelines for AI use, structured interdisciplinary collaboration among practitioners, policymakers, and AI developers, and comprehensive training programs that prepare social workers for human-AI interaction while preserving relational integrity.
These implications will foster ethical, human-centered AI integration and help social workers navigate the complexities of technology in their practice while maintaining professional and relational standards.
Future research should focus on several key areas to build upon the findings of this study and further enhance the integration of AI in social work practice, including quantitative validation of the practitioner strategies identified here, broader sampling across diverse geographic and socioeconomic contexts, and longitudinal examination of how sustained AI use shapes practitioner-client relationships and client outcomes.
These research directions aim to address current knowledge gaps and provide actionable insights into the long-term impact of AI on social work practice.
In conclusion, this study emphasizes the critical role of maintaining human empathy in the digital era of social work. While AI technologies can significantly enhance operational efficiency and decision-making, they should never replace the human relational core of social work. By identifying practical strategies for maintaining empathy, promoting human-AI collaboration, and addressing ethical tensions, this research offers a comprehensive framework for integrating AI into social work practice. The study’s findings contribute to the ongoing discourse on AI in human-centered professions and provide valuable insights for the future development of AI tools in social services.
REFERENCES
AUTHOR BIOGRAPHY