Dancing to the (Algo)rithm?

By Morgan T. Sammons, PhD, ABPP

Telehealth is suddenly a hot property. According to the Global Telemedicine Market Outlook for 2020, the market for telemedicine is projected to grow from $26B in 2018 to over $40B in 2020, although these estimates seem to be at the high end of projections. The investment website Investopedia ranked the firms CareClix, Doctor on Demand, myTelemedicine, Teladoc, and iCliniq as proven industry leaders in telemedicine. All of these sites offer, with varying degrees of access, almost immediate virtual consultation with a healthcare provider via smartphones, specialized apps, or websites. Some connect with billing portals to coordinate covered payments for services.

As usual, the market for psychological services is much smaller than that for medical services, but as you are all aware, telepsychology has captured the interest of mental health professionals and entrepreneurs for some time. Recently, a New York company called TalkSpace announced a partnership with Optum Behavioral Health (a UnitedHealthcare subsidiary, and the subject of recent high-profile lawsuits as discussed in my May 2019 column) to provide covered services to two million Optum-insured patients. The company has raised capital of more than $100M to fund expansion, and reportedly contracts with over 5,000 therapists. Patients can text or leave voice/video messages for a flat fee of $49 per month. Live video interactions cost more.

Predictably, enthusiasm is running ahead of practical realities in this sphere. As I have repeatedly observed in these columns, most cell phone platforms are not HIPAA compliant, and the legal atmosphere surrounding telehealth remains humid with issues relating to privacy, data ownership, and provider malpractice. But the investment community seems to believe that these issues will be satisfactorily resolved in favor of expanded remote healthcare services. They are likely right, if for no other reason than that it is a safe bet to presume that most individuals seeking medical or psychological services have a more nuanced view of privacy than regulators do. After all, billions of users of Facebook and other apps share intimate personal details on the internet and have de facto voted with their thumbs to surrender privacy rights to large corporations. While lawsuits over telehealth privacy violations are almost guaranteed, it is reasonable to presume that large corporate sponsors like Optum will view these as the cost of doing business. This is a proven model in the pharmaceutical industry, where manufacturers are willing to risk multimillion-dollar lawsuits or regulatory penalties because the profit margins for successful drugs are so high (although recent lawsuits pushing OxyContin's manufacturer, Purdue Pharma, to the brink of bankruptcy remind us that these strategies are not without significant risk).

Psychology is certainly sitting up and paying attention. Our involvement in telehealth has spanned decades, and as most of you know the first documented instance of distance mental health service provision was group therapy via closed-circuit television almost 60 years ago. I was a provider of telepsychology services 20 years ago, and likely still one of the few psychologists to provide combined pharmacological and psychological therapy across the Atlantic Ocean–from the National Naval Medical Center in Bethesda, Maryland to the hospital at Naval Air Station, Keflavik, Iceland. Much has changed, but the basic principles of patient-therapist interaction have not, nor has the complexity of regulatory oversight of distantly provided health services.

Just two months ago, a major milestone for the profession was reached when the Association of State and Provincial Psychology Boards (ASPPB) announced the successful formation of an interstate compact governing provision of telepsychology services between member jurisdictions – PSYPACT. The compact was ratified in April when a total of eight jurisdictions (IL, IA, NE, CO, UT, NM, AZ and GA) passed legislation enabling it (for some obscure reason, at least seven states or territories must be signatories to any interstate compact in order for it to go into effect). Once the signing jurisdictions form a commission and agree on regulations and procedures, it will become operational, likely sometime in 2020 or 2021. Other states may join later. PSYPACT will allow unlimited telepractice between member jurisdictions and up to 30 days of face-to-face practice annually for registered psychologists in compact states.

In many respects, this is quite an accomplishment. Interstate compacts governing healthcare practice are not common, since most states jealously guard their oversight of licensed professionals. PSYPACT could, in the long run, serve as a stepping stone to a highly desired outcome–uniform nationwide standards for licensing and practice for all psychologists, not just those who engage in telepsychology. At present, however, the main question seems to be whether PSYPACT will make a difference to commercial providers of telebehavioral or even telemedicine services. These companies already do a multimillion-dollar business in all states, regardless of the existence of interstate compacts. By characterizing their offerings as something other than psychological service provision, or by using providers who have more permissive licensure structures (at least for certain activities), commercial telehealth entities have adopted the “any willing provider” standard that characterizes much of managed behavioral healthcare. Ask yourself: Would you rather your patients see someone whose license and practice have been verified by collaborating state boards, or someone who has signed up, with minimal training or scrutiny, with a commercial telehealth firm? For us, the answer is clear. But we must continue to work with regulators, legislators, and patients to ensure they share our understanding.

As important as telepsychology standards are, my attention has been consumed by another phenomenon that I believe will have an even more profound influence on how we practice than telepsychology: the patenting of behavioral interventions. Those paying attention will have noticed a significant uptick in marketing such interventions, and the FDA is now exerting regulatory approval over certain Mobile Medical Applications, or MMAs. Briefly, in 2013 the FDA initiated an approval process for MMAs. This process exerts varying degrees of scrutiny over healthcare applications. The highest scrutiny is placed on devices that measure or monitor physiological processes such as heart rhythm or serum glucose, because accuracy of measurement is essential to ensure patient safety and to dictate proper therapy. Devices that collect data or impart instructions regarding mental disorders, such as monitoring for symptoms of depression and suggesting interventions, receive a lower degree of regulatory scrutiny. Devices that merely provide information or simplify care processes, such as schedulers, are not regulated as MMAs.

I am sure that none of you will be shocked to learn that many of these behavioral interventions are being developed and submitted for approval by pharmaceutical firms. One current app, for example, is aimed at improving patient adherence to medication, and is being marketed to pharmaceutical firms with the explicit goal of improving return on investment (more adherent patients means more pills consumed, after all). MMA approval can also extend patent protection on devices, in the same manner that drug approval extends patent protection on molecules. And an MMA or low-risk approved medical device that must be prescribed, such as the recently released Cervella, a transcranial electrotherapy stimulator in a wearable headset, has a much better chance of being covered by insurers (Cervella transmits microelectrical pulses via noise-cancelling headphones designed to be wearable during normal activities).

Non-medical device apps blur the line between a traditionally administered therapy and a patented regimen. Pear Therapeutics, for example, in partnership with the pharmaceutical company Novartis, is developing a CBT-type intervention for patients with schizophrenia, designed to enhance response to antipsychotic medications (and of course simultaneously improve adherence). Other apps use components of recognized psychological treatments to enhance management of ADHD, major depressive disorder, and Alzheimer’s disease.

Such apps have obvious appeal to Big Pharma. The psychotropic drug pipeline has slowed to a trickle, and there is increasing awareness of the limited therapeutic benefits of drug treatment. An all-in-one package that combines behavioral or psychological interventions with some form of outcomes assessment is likely to provide stronger evidence of a drug’s effectiveness, and thereby increase sales. This isn’t necessarily bad. The implicit acknowledgement that drugs should be combined with psychosocial or behavioral interventions is overall positive – and long overdue. Also, since most patients are treated not by specialists but in primary care, where the chances of receiving drug therapy are high and the chances of receiving psychotherapy low, a mechanism to expose patients to non-drug treatments could, perhaps, expose more patients to efficacious interventions.

But at the same time, numerous risks can be envisioned. Here are a few very pragmatic ones: Will insurers only pay for an FDA-sanctioned treatment, and bypass equivalent treatments that aren’t packaged into an approved app? Will they ignore providers who are not authorized to “prescribe” a certain app? Will patients taking certain types of medication be required to also purchase and utilize an app, where they may have little or no control over what data are collected and what patient information is being shared with app developers?

While not downplaying the significance of these pragmatic concerns, a larger concern again looms. At heart, all of these apps are built around algorithms that make fundamental assumptions about all patients and all disorders. Mr. X, with a diagnosis of major depression, is anticipated to respond to an intervention in much the same way as is Mrs. Y, who receives the same intervention for the same diagnosis. And, just as we measure response to drug treatment, progress in algorithmic treatment is also likely to be measured in nomothetic ways – I imagine by a reduction in scaled scores on standardized tests of depression. A patient who fails to achieve algorithmically determined treatment goals may quickly be viewed as a treatment failure. Apps, for all their sophisticated technology and the power of their data collection, basically assume that humans behave exactly like the instructions in the user guide say they will. Real-world experience repeatedly demonstrates the fallibility of this assumption. We seem to continually forget this lesson: It is less the precision of an intervention than the quality of the relationship between the patient and the provider that makes the most difference.

I’m not sure I can foresee a future where patients form working therapeutic relationships with their apps (in my own experience, Siri and I broke up after a very brief courtship; Alexa may be a nice, if quite intrusive, robot, but a psychologist she is definitely not). Patients may profess to love their apps, but I have yet to see one that loves them back.

Copyright © 2019 National Register of Health Service Psychologists. All Rights Reserved.
