RSA 2019 Reflections: Why Trust is Paramount & What the Industry is Doing About It

Last week we, along with tens of thousands of other security professionals, attended RSA Conference 2019. The theme of this year’s event was “Better” – how cybersecurity overall can be better and what organizations can do to better their own security outcomes.

But trust – the desire to capture it, the lack of it among customers and its evolving definition – remained a central theme across the many keynotes, exhibitor booths, vendor announcements and chatter among industry professionals.

Missed the event but want takeaways from the show floor? We’ve outlined the three biggest themes covered at RSA Conference:

Better together: Humans + Machines

Rohit Ghai, president of RSA Security, took the stage during a keynote depicting what life will look like in 2049. His image of this brave new world? “The consumer owns the data and has perfect information on her data and its copies and where it flows… Data is the primary asset flowing through supply and distribution chains, and the provenance and governance of data is an essential competency.”

He believes that humans and machines need to work together to properly enable the future trust landscape. “Stop waiting for humans or machines to get better at things they are terrible at,” he said. “Implement a security program with machines and humans working together. Humans asking questions. Machines hunting answers.”

Our take: Acuant uses this same model to assess the fraud risk of IDs. And while tools like AI and machine learning are extremely efficient at distinguishing real documents from fraudulent ones (processing millions of transactions at a rate unachievable by human experts), a human eye is still needed for accuracy.

IDs are physical documents that endure wear and tear and manufacturing discrepancies. Human researchers are still needed to train the algorithms that score an ID’s fraud risk and automatically route documents warranting further scrutiny to human reviewers. This mix of AI and human expertise cuts processing time from days to minutes and remains the most successful threat prevention model.
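
To make that triage step concrete, here is a minimal sketch of “machines handle the clear cases, humans review the ambiguous ones,” assuming a model that outputs a fraud-risk score between 0 and 1. The function names, thresholds and routing logic below are illustrative assumptions, not a description of Acuant’s actual pipeline.

```python
# Illustrative sketch only: a simplified "machines triage, humans review" step.
# The score ranges, thresholds and routing are hypothetical assumptions.
from dataclasses import dataclass


@dataclass
class Decision:
    document_id: str
    risk_score: float  # 0.0 = clearly genuine, 1.0 = clearly fraudulent
    outcome: str       # "accept", "reject" or "human_review"


ACCEPT_BELOW = 0.2  # assumed thresholds; in practice tuned to risk appetite
REJECT_ABOVE = 0.9


def triage(document_id: str, risk_score: float) -> Decision:
    """Route a scanned ID based on a model's fraud-risk score."""
    if risk_score < ACCEPT_BELOW:
        outcome = "accept"        # machine handles the clear-cut cases
    elif risk_score > REJECT_ABOVE:
        outcome = "reject"
    else:
        outcome = "human_review"  # ambiguous cases go to a human expert
    return Decision(document_id, risk_score, outcome)


if __name__ == "__main__":
    for doc_id, score in [("id-001", 0.05), ("id-002", 0.55), ("id-003", 0.97)]:
        print(triage(doc_id, score))
```

In practice the thresholds would be tuned to an organization’s risk appetite, and the cases routed to reviewers are exactly where human expertise stays in the loop.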

All signs point to federal privacy legislation

There has long been talk of a potential U.S. privacy law, but with the advent of GDPR in Europe and the California Consumer Privacy Act of 2018, a bill finally seems inevitable. Experts from the IAPP, Google, Microsoft and Twitter noted that the likelihood of a federal privacy law passing in the next year is higher than in years past. During the session, Julie Brill, corporate vice president and deputy general counsel at Microsoft, put the odds at 30%. “It’s no longer a question of if there will be a privacy bill, but what that bill would look like,” she said.

There are signs Congress will tackle privacy legislation again this year, and technology companies such as Google have a keen interest in shaping the federal privacy law. While there are several points of disagreement on what the law should cover, interest is high on both sides of the aisle in Congress to do something on the federal level to protect consumers.

Our take: The U.S. in particular is currently a hotbed of frustration over the mismanagement of personally identifiable information (PII) and the lack of protection for digital identity. With all signs pointing to federal legislation defining what is and is not permitted with respect to digital identity, lawmakers should focus on giving individuals control over their identity and making clear that individuals should manage how and where their data is shared.

Complexity creates security gaps

Multiple speakers identified the complexity of security products as an industry shortcoming. Rob Westervelt, research director of security products at IDC, said the growing complexity of security solutions has led to gaps in coverage: “Organizations don’t fully understand the capabilities of the technology they have deployed.” This complexity also leads to misconfiguration and security policies that are not uniformly deployed across an enterprise’s IT footprint.

Complexity has also led to organizations not using 2FA effectively. Researchers L. Jean Camp and Sanchari Das from Indiana University Bloomington detailed the challenges of 2FA adoption during their session. They posited that simply providing 2FA to users isn’t enough; it’s also critically important to communicate why and how to use the technology. Users need to be aware of the risks, and security vendors need to make it easier for users to understand.

Our take: We agree that complexity creates security gaps, especially in an area as multifaceted as identity verification. We also recognize that establishing identity in the digital world is a fluid process, especially as questions continue to arise about the collection, processing and ownership of data. Identity verification is always a balance between risk and friction. But by creating a “trust anchor” (such as an authenticated government-issued ID), organizations can let the user take control of the verification process and decide which parts of their identity, and which data, a company may use to establish verification. This reduces the complexity and friction of the process and builds trust.
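
To illustrate the idea, here is a minimal sketch of what user-controlled attribute sharing from a trust anchor could look like, assuming the ID has already been authenticated. The VerifiedIdentity structure, attribute names and consent set are hypothetical, not a description of any specific product or API.

```python
# Illustrative sketch only: user-controlled sharing of attributes from an
# already-authenticated ID (the "trust anchor"). Structure and field names
# are hypothetical assumptions.
from dataclasses import dataclass
from typing import Dict, Set


@dataclass
class VerifiedIdentity:
    attributes: Dict[str, str]        # data extracted from the authenticated ID
    document_authenticated: bool = True


def share_attributes(identity: VerifiedIdentity,
                     user_consent: Set[str]) -> Dict[str, str]:
    """Release only the attributes the user has consented to share."""
    if not identity.document_authenticated:
        raise ValueError("No trust anchor: the document was not authenticated")
    return {k: v for k, v in identity.attributes.items() if k in user_consent}


if __name__ == "__main__":
    identity = VerifiedIdentity(attributes={
        "full_name": "Jane Doe",
        "date_of_birth": "1990-01-01",
        "document_number": "D1234567",
        "address": "123 Main St",
    })
    # The user agrees to share only name and date of birth with this company.
    print(share_attributes(identity, user_consent={"full_name", "date_of_birth"}))
```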

Interested in learning more about how to protect your business and establish trust? Read our whitepaper!