Yvette’s inbox dings at 3:02 pm on 13 May 2038. It’s the list of trades executed by the algorithms that day. A quick review raises no red flags, which is good because she is headed into a sign-on meeting with a new client.
“I need this money in the next four years, and I’m worried about buying stocks while they are at all-time market highs,” Alex, the new client, explains. “And I really don’t want to invest in tobacco or marijuana companies.”
“I’ll include all of that in your investment policy statement,” Yvette says. “I should have the draft to you by tomorrow. Do you have any other concerns?”
The meeting ends and Yvette returns to her desk. The IPS is almost finalized. She just adds the environmental, social, and governance (ESG) restrictions and forwards it to Alex for electronic signature.
Yvette opens her integrated development environment (IDE) and revises the algorithm she has written for Alex, excluding tobacco and marijuana companies from Alex’s personal investment universe. Though some of these companies are included in the investment universe of Yvette’s firm, such client-instituted restrictions are fairly common. At 5:38 pm, Yvette forwards Alex’s final algorithm and IPS to compliance for review and then gathers her belongings to head home for the day.
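What does such a restriction look like in practice? Something like the minimal sketch below, assuming a simple dictionary-based universe; the tickers, sector labels, and function name are purely illustrative, not any firm’s actual format.

```python
# Hypothetical client-level exclusion filter; sector labels are assumptions.
CLIENT_EXCLUDED_SECTORS = {"tobacco", "marijuana"}

def client_universe(firm_universe, excluded=CLIENT_EXCLUDED_SECTORS):
    """Drop firm-approved securities that fall in client-restricted sectors."""
    return [s for s in firm_universe if s["sector"] not in excluded]

firm_universe = [
    {"ticker": "AAA", "sector": "industrials"},
    {"ticker": "BBB", "sector": "tobacco"},
]
print(client_universe(firm_universe))  # only the industrials name survives
```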
It wasn’t always this way. Firms used to simply run model portfolios: monolithic “boxes” that approximated client needs. Financial planning was more customized, but running separate portfolios for each client was a sure ticket out of business. The calculation and trade-execution burden alone shut out any possibility of customized, client-by-client solutions, unless the client had an account large enough to justify the fee.
Algorithmic solutions changed all that. Firms could now focus on broad, macro-level due diligence, while wielding their expertise to build scalable, repeatable systems. Each firm had its own take on how markets worked, its own machine-learning models, and its own money management philosophy. These proprietary techniques became the firms’ master algorithms.
But every client is different, so the application of that master algorithm came to be customized through a client-level algorithm that the portfolio manager develops in consultation with the client. Since most clients can’t read code, the PM’s primary role is now that of a “translator” of sorts, converting the client’s needs and wishes into this custom algorithm. The algorithm executes the plan, but the PM has to build it. As we all well know, these firms are now known as “algocen firms” — a portmanteau of “algorithmically centered” firms.
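Architecturally, the client-level algorithm is a thin wrapper around the house view. Here is a minimal sketch of that pattern, assuming hypothetical function names; master_algorithm merely stands in for whatever proprietary model a firm actually runs.

```python
def master_algorithm(universe, signals):
    """Stand-in for the firm's proprietary "house view": rank by signal."""
    return sorted(universe, key=lambda ticker: signals.get(ticker, 0.0),
                  reverse=True)

def client_algorithm(universe, signals, restrictions, max_positions=25):
    """Client-level layer: enforce the IPS, then delegate to the house view."""
    allowed = [t for t in universe if t not in restrictions]
    return master_algorithm(allowed, signals)[:max_positions]

# The same master algorithm serves every client; only the restriction
# set and position limits differ.
picks = client_algorithm(
    universe=["AAA", "BBB", "CCC"],
    signals={"AAA": 0.8, "BBB": 0.9, "CCC": 0.1},
    restrictions={"BBB"},
)
print(picks)  # ['AAA', 'CCC']
```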
Ten years earlier, when fee compression threatened the careers of human advisers, the automation revolution appeared to sound their death knell. Why would a client pay in excess of 1% per annum when the same service could be automated for a quarter of that cost?
Yet, counter to the prevailing wisdom of the time, the leverage offered by technology reversed the trend toward fee compression and offered unprecedented scalability — Luddite firms notwithstanding. As it turns out, clients are willing to pay for better solutions and the hyper-customization that can only be attained through technology.
This move toward hyper-customization and scalability should have been led by robo-advisers. Indeed, that was the expectation. But robo-advisers were built by software engineers, not financial advisers. That was their fatal flaw. Oddly, they delivered the same product the industry had always delivered: They revolutionized the platform rather than the service. Though they built efficient and scalable solutions, they could not meet the innate human need for personal interaction — especially when solving for something as critical as retirement.
Moreover, software engineers simply did not understand the business of wealth management. They saw it as a strictly quantitative, academic exercise and left no room for elements that were more . . . human.
Coding came to the masses. As higher-level programming languages, online tools, and courses proliferated, coding shed its alchemical mystique and emerged from the dim backroom. Code, then, became a way to better execute the models used by portfolio managers — a sort of secondary skill, like spreadsheets or Bloomberg access.
But it was more than that. Code ultimately became a way to institutionalize the “house view.” Differentiating firms was now less about hiring the best talent and granting them carte blanche, and more about hiring the best talent in specific roles, roles that infused algorithms with a unique view of markets. That view, then, could proliferate across a firm with little to no marginal cost, constituting a boon to firms and their clients.
But not so much to portfolio managers. Once a prized thought leadership role, portfolio management is now more mundane, more cog-like, and less creative. Of course, that’s preferable to being swept into the dustbin. Portfolio management could have easily gone the way of the long-haul trucker.
Compliance is another metamorphosed role. Its review is now much more of a code-review function, one that ensures the human-language IPS matches the computer-language IPS. Many of the traditional functions remain, of course, and there is now a greater need for regulatory interaction, especially since regulators have been generally slow to understand and oversee this algorithmic migration. Compliance, sometimes more than portfolio management, has become a translation and expositional role focused on what the master algos are doing and why they are doing it.
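One can imagine the sort of automated check that now anchors a compliance review, sketched here with hypothetical names and a deliberately simplified notion of “restrictions”:

```python
# Hypothetical compliance check: do the coded restrictions match the
# restrictions stated in the human-language IPS? All names are illustrative.

def review(ips_restrictions, algo_restrictions):
    """Flag any mismatch between the signed IPS and the client algorithm."""
    missing = ips_restrictions - algo_restrictions  # in the IPS, not enforced
    extra = algo_restrictions - ips_restrictions    # enforced, not in the IPS
    if missing or extra:
        raise ValueError(f"IPS mismatch. Missing: {missing}, extra: {extra}")
    return "IPS and algorithm are consistent"

print(review({"tobacco", "marijuana"}, {"tobacco", "marijuana"}))
```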
The regulators have made their share of changes. The SEC recently announced a plan to build a code-review division, sending a strong signal to algocen firms that this isn’t the Wild West anymore. Someone will be looking over their shoulders.
To be fair, the recent scandal of “that algocen” — we all know the one — demonstrated how much money a few well-placed people can bilk from unsuspecting investors and their equally unsuspecting portfolio managers. The architects of the firm’s master algorithm structured a small subroutine — only three lines of code! — to front-run large client trades. Trades over their “large-enough” threshold triggered a conditional statement in the firm’s master algo that paused execution for 100 milliseconds and exported the trades to an alternate algorithm running on a separate server. That algo then bought the securities, only to sell them two seconds later, after the client’s fulfilled purchase had moved the price of the security slightly higher. While the scheme netted just a few pennies of profit per trade, given the trading volume, all those pennies added up to a considerable sum.
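To make the mechanics concrete, here is a loose reconstruction of how such a subroutine might work. Every name, threshold, and helper below is hypothetical; the actual code has never been published.

```python
import time

LARGE_ENOUGH = 1_000_000  # hypothetical notional threshold, in dollars

def export_to_shadow_server(order):
    """Stub standing in for the hand-off to the front-running algorithm."""
    pass

def route_to_market(order):
    """Stub standing in for the normal execution path."""
    pass

def execute(order):
    # The three illicit lines: divert large orders to a separate server,
    # pause 100 milliseconds, and let the client's own purchase push the
    # price up before the shadow algo sells two seconds later.
    if order["notional"] > LARGE_ENOUGH:
        export_to_shadow_server(order)
        time.sleep(0.1)
    route_to_market(order)

execute({"notional": 5_000_000})  # a "large-enough" order takes the detour
```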
And the alleged conspirators would have gotten away with it. They were only caught because one was going through a messy divorce, and their spouse demanded half of the offshore account that held the ill-gotten gains.
As the SEC alleges, this activity went on for almost five years because no one ever reviewed the master algorithm — except the perpetrators. And even if someone had, they probably wouldn’t have found and flagged those three lines of code. Or the conspirators would have just deleted them ahead of the review.
Academic research has helped in this regard. Reviewing millions of lines of code for a few inconspicuous instructions is a monumental task, though so was reviewing millions of firm documents before digital storage and search functions were invented. Even so, the need to spot fraud-facilitating code has led to countless papers and theoretical breakthroughs from the academic community. We are entering an age when computer scientists will win Nobel Prizes in economics. And for good reason: Their tools will help prevent the defrauding of the common investor.
Other than volume spikes, the algocen’s emergence as the dominant interface for investors has had little effect on market dynamics. Some expected irrational behavior to wane as computerized trading eliminated the cognitive errors to which humans are prone. That hasn’t happened. Herding behavior, momentum, and trading-range breaks are still persistent anomalies in an otherwise efficient and rational market. It may be that the expectation of these anomalies, as discussed in recent studies, has been enough to maintain them, as though the remnants of a bygone era were hard-coded into markets. Ironically, the algorithmic revolution may have calcified, rather than cut out, our cognitive biases.
The near-extinction of the exchange-traded fund (ETF) cannot be laid directly at the algocens’ feet. After all, passive investing’s meteoric rise took place before the algocen revolution. But algocens did exacerbate the industry’s dependence on ETFs: Their specified exposure and low cost made them an easy early choice, and so they came to represent the lion’s share of algocen portfolios. That ETFs would take the walloping they did was difficult to anticipate at the time.
There were clues, of course. The 24 August 2015 “flash crash” was a big one. Many blue-chip stocks hit their circuit-breaker limits and suspended trading. Because authorized participants could not arbitrage between the stocks and the ETF portfolio, many ETFs went into freefall — massively diverging from their benchmarks. Once trading resumed, the arbitrageurs pushed ETF prices back up again. Much of the stress had dissipated by the close, and because most observers hadn’t seen the intraday price movements, they just thought it was a bad day on Wall Street. In the end, as we now know, it was a harbinger.
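For readers who have forgotten the plumbing, authorized-participant (AP) arbitrage is what normally tethers an ETF’s price to the net asset value (NAV) of its underlying basket. A stylized sketch of that decision rule, with hypothetical numbers and names:

```python
def arbitrage_signal(etf_price, basket_nav, cost=0.001):
    """Stylized AP rule: trade when the ETF strays from basket NAV
    by more than transaction costs."""
    if etf_price < basket_nav * (1 - cost):
        return "buy ETF shares, redeem them for the basket, sell the stocks"
    if etf_price > basket_nav * (1 + cost):
        return "buy the stocks, create ETF shares, sell the shares"
    return "no trade"

# On 24 August 2015, halts in the underlying stocks made basket_nav
# untradable (and unreliable), so APs stood aside and ETF prices fell
# far below fair value until trading resumed.
print(arbitrage_signal(etf_price=95.0, basket_nav=100.0))
```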
Speculative history is a dicey business, but if trading had not resumed in those stocks and the market had closed with those passive funds as displaced as they were, the recent crisis could very well have been avoided — or at least mitigated. Professionals, individual investors, and regulators would have raised many more questions. Perhaps that one bad day could have prevented a crisis. We will, of course, never know.
To be fair, experts had issued warnings, cautioning against the overuse of passive investment funds, especially in illiquid market segments. Those are the very market segments that have borne the brunt of the industry’s calamity.
The obvious drawbacks notwithstanding, the algocen’s biggest value add has been the increasingly tailored approach available to individual investors. Retail investors now have a level of customization that, just a few years ago, was accessible only to the wealthiest. The ability to buy and sell through various market dynamics, to account for and curb portfolio losses, and to include or exclude particular securities is available only because of cheap computing power and the relative ease of coding. This tailoring gives investors the best chance to achieve their financial goals without sacrificing their values.
Ultimately, that will be the legacy of the algocen firm.
Yvette’s email dings at 9:32 am the next day. Compliance has finished its review of her client’s IPS and has suggested a few minor code revisions. She inputs them and forwards Alex’s custom algo to the firm’s architects, then emails her client that they could be up and running as soon as tomorrow, pending Alex’s electronic signature.
Yvette opens her IDE to finish the IPS for the foundation that just signed on. Old-school as it is, the foundation has requested a quadratic utility function with a risk-aversion parameter . . .
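For the uninitiated, that old-school request boils down to scoring each candidate portfolio roughly as follows; the function is a textbook illustration, not the foundation’s actual specification.

```python
def quadratic_utility(expected_return, variance, risk_aversion):
    """Quadratic (mean-variance) utility: U = E(r) - 0.5 * A * variance.

    The higher the risk-aversion parameter A, the more heavily
    portfolio variance is penalized.
    """
    return expected_return - 0.5 * risk_aversion * variance

# Example: 7% expected return, 20% volatility (variance 0.04), A = 4.
print(quadratic_utility(0.07, 0.04, 4))  # 0.07 - 0.5 * 4 * 0.04 = -0.01
```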