Going Back to the Future with Semantic Analytics
Semantic analysis is the way forward for financial firms looking to find a competitive edge in the era of big data, says John Blossom, president of Shore Communications.
In a recent webinar on reinventing financial insight, Blossom discussed the “value trap” that has formed in financial markets over the years as automated trades took precedence over intelligent insights.
“We really had things clicking with all the automation we got in there with financial information,” said Blossom in explaining the trap. “It, in a sense, created a monster of its own because it commoditized the very thing that was creating the value in automation – everybody at all those turrets around the world were looking at the same data points and looking at pretty much the same spreadsheets, so victory was going to the machines that could create the trades as fast as possible.”
According to Blossom, with everybody looking at the same data and using similar analysis, investment advantages go only to the fastest machines. That was the case, said Blossom, until the emergence of hedge funds, where suddenly small groups of people with unique insights could create opportunities that didn’t previously exist.
“All of a sudden, [traders] could create market advantages that automated trading was not able to create previously because people were looking at unique sources of information that clued them in as to when market trends were leaning a particular way – that began to bring a new sort of factor into financial markets,” said Blossom.
That new factor for the future, argues Blossom, is text analytics. With hedge funds, people armed with the same tools but with unique insights, drawn from very specific data sources off the typical grid, were able to seize market opportunities that others couldn’t see. This is the model for the big data future, according to Blossom: success comes from applying “grammar” and semantic rules to identify patterns in unstructured big data, not merely from having the fastest machine.
“In text analytics,” says Blossom, “we’re moving from data analysis – just raw data points – to understanding more human motivations: focus and intent analysis.” This understanding, explains Blossom, is crucial for success in the financial markets of the future.
“Human language has more sophisticated triggers than the data in a spreadsheet – there’s a lot more that goes into financial decision-making based on the humanness of our understanding of the world than we’d like to let on. It may not mean that trading floors like the New York Stock Exchange will exist forever, but it does mean that we need to understand semantics forever.”
The net goal, explains Blossom, is to get ahead of the machines: to surface things that people can eyeball and understand at a human level, so they can react in human time, and to use technology not simply to be faster than the other person’s machine but to reach more intelligent insights than the other person as quickly as possible.
Blossom explains that by digging deeper into the subject-predicate-object structure contained in the endless stream of unstructured data (sources such as Bloomberg messaging, emails, Squawk Box transcripts, social media, CRM systems and so on), and by combining these disparate structured and unstructured sources in a way that identifies macro trends, traders can theoretically act weeks ahead of time and take advantage of opportunities that would otherwise be missed.
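To make the subject-predicate-object idea concrete, here is a minimal, hypothetical sketch of extracting such “triples” from headline-style text. It is not Blossom’s system: production tools use full linguistic parsers, whereas this toy assumes simple declarative sentences and a hand-made verb list.

```python
# Toy subject-predicate-object ("triple") extraction.
# Assumption: sentences follow a simple "Subject verb object" shape,
# and the predicate verbs are known in advance. Real semantic-analytics
# pipelines would use a dependency parser instead of this word scan.

KNOWN_VERBS = {"raises", "cuts", "acquires", "beats", "misses"}

def extract_triple(sentence):
    """Split a simple declarative sentence into (subject, predicate, object)."""
    words = sentence.rstrip(".").split()
    for i, word in enumerate(words):
        if word.lower() in KNOWN_VERBS:
            subject = " ".join(words[:i])
            obj = " ".join(words[i + 1:])
            return (subject, word, obj)
    return None  # no recognized predicate found

headlines = [
    "The Fed raises interest rates.",
    "Acme Corp acquires Widget Inc.",
]
for h in headlines:
    print(extract_triple(h))
```

Running this prints `('The Fed', 'raises', 'interest rates')` and `('Acme Corp', 'acquires', 'Widget Inc')`; aggregating such triples across many sources is one way patterns like the macro trends Blossom describes could be surfaced.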
In a sense, says Blossom, this takes the financial industry back to fundamental roots.
“Some of you may remember decades ago when Alan Greenspan used to go in for his testimony on rates with the Fed and everybody would be looking at his briefcase to see whether it was thin or fat and speculating as to [what that meant] regarding rate changes,” recalled Blossom. “In a sense, that’s what text analytics with semantic smarts can bring us to – bringing some human level into things that we have not necessarily been getting out of these data streams before.”