In the first part of this series, I defined cyber threat intelligence sharing and described how it can benefit both your organization and the broader community of organizations. In Part 2, we will review the good and the bad of threat intelligence-sharing programs and suggest some solutions.
When Does Intelligence Sharing Work?
Cyber threat intelligence-sharing communities have experienced varying degrees of success. For example:
- The Financial Services Information Sharing and Analysis Center (FS-ISAC) has been quite successful in fostering cyber threat intelligence sharing within the financial industry. FS-ISAC anonymizes submissions before verifying and analyzing the threat and any recommended solutions, then disseminates the intelligence to other member organizations. The community also includes many active committees and working groups that help improve processes, capabilities, and shared knowledge.
- The U.S. Department of Homeland Security (DHS) Automated Indicator Sharing (AIS) program allows sharing between the U.S. government and private sector organizations. However, many view onboarding into the AIS community as cumbersome: it requires, for example, dedicated hardware, facilities clearances, and legal agreements. And while bi-directional intelligence sharing is intended, sharing thus far has been mostly one-way, from government to the private sector.
- The Cyber Threat Alliance (CTA) is a non-profit organization that operates a platform for sharing cyber threat intelligence. While several large companies helped found the CTA, the organization is still proving that it can grow and encourage widespread adoption and use.
What makes intelligence sharing successful in some communities but not others?
In cyber threat intelligence-sharing communities, every member stands to benefit greatly from other members’ intelligence, so each member wants to receive intelligence from the others. When members find new intelligence, however, sharing it with the community requires an investment: they may have to validate the accuracy of the threat intelligence and scrub sensitive data, possibly reveal that they have been breached, and/or hand competitors an advantage the sharer paid for by encountering the threat first.
Ultimately, members may choose to not invest in contributing to the community while they continue to seek intelligence from others. This is a classic “tragedy of the commons” problem, in which individuals’ self-interests keep them from acting according to the common good.
In some contexts, however, members do perceive that the value of the common good outweighs the cost of investing in sharing. For example, in industries like the financial sector, either greater regulatory control or a loss of customer confidence would be extremely expensive (i.e., if banks aren’t perceived as safe, people won’t deposit their assets). Such stakes provide strong economic incentive for members to ensure the entire industry remains resilient to cyber threats, which may help explain why the FS-ISAC has enjoyed such great success.
Why Doesn’t Intelligence Sharing Work?
There are some roadblocks to intelligence sharing. A 2009 article published in the Journal of the Association for Information Systems explored the impediments that hamper the creation of value in these efforts.
Figure 3 provides a visual summary of the impediments found and the activities they impact.
According to that research, the following five impediments can directly affect intelligence sharing:
- Inaccessibility: Inability to obtain existing known data
- Source Identification: Not knowing where to obtain data
- Low Priority: Intelligence is not considered important enough to collect, process, and/or share
- Storage Media Misalignment: Intelligence storage method does not support desired intelligence activities
- Unwillingness: A refusal to transmit data to others
Fortunately, the first four impediments are relatively minor. Participants can often resolve them through technological means: providing access, enabling search, automating processes, or integrating standards.
Unwillingness, however, is a more common impediment that arises from many familiar organizational concerns, including trust, privacy, legal issues, and compensation for value creation. The choice not to share is an easy one that mitigates several risks outside of the organization’s control.
For example, many organizations don’t trust that other intelligence-sharing community members will respect their desires for attribution, anonymity, modification, or distribution restrictions. There are often concerns that content may unintentionally reveal information about the organization or infringe on others’ privacy, such as when a company is publicized as sharing personal information with governments.
Primary legal concerns include sharing intelligence that is subject to international restrictions (such as the EU’s General Data Protection Regulation, GDPR) as well as mitigating the potential for legal retaliation. Thankfully, thus far there have been no significant civil or criminal cases that punish cyber threat intelligence sharers.
Concerns about receiving reasonable compensation for value creation also drive unwillingness. An organization that creates cyber threat intelligence has invested its limited resources in identifying a threat, investigating it, and creating intelligence. Sharing that intelligence takes additional effort to vet, scrub, and disseminate. Other companies, possibly even competitors, then benefit from the investment made by the original organization, so it is often difficult to explain to board members and stockholders why intelligence should be shared. And while other community members provide intelligence in return, the value equation constantly fluctuates, with no guarantee of receiving reasonable compensation for the value created.
As a result, in any sharing community some members will benefit disproportionately relative to others. Many organizations join sharing communities to receive intelligence but fail to provide it. Others share only low-value or stale intelligence. The high-value sharers are often the founders or champions of a community, hoping to set an example for other members to follow. Finally, some communities impose sharing quotas on each member, but this approach is often difficult to maintain. It would seem the interventions that have reliably mitigated the “tragedy of the commons” problem faced by cyber threat intelligence-sharing communities are regulatory pressure, community membership requirements, and customer requirements for industry-wide security.
Potential Directions for Improvement
Since unwillingness is primarily a behavioral impediment to intelligence sharing, we will focus on approaches designed to help alleviate the concerns of trust, privacy, legal issues, and compensation for value creation. An argument could be made that the “low priority” impediment deserves attention as well; however, overcoming unwillingness should promote a cultural shift that drives more organizations to better prioritize intelligence sharing.
Legal and Privacy
For legal and privacy issues, the 2015 U.S. Cybersecurity Information Sharing Act (CISA) represents a step in the right direction, but it falls short. CISA gives organizations several legal and privacy protections as well as guidelines for the treatment of personally identifiable information (PII). More specifically, it protects private sector entities from liability for sharing or receiving cyber threat indicators. While it does not require intelligence sharing, it sets forth a framework for federal agencies so they can receive threat intelligence from organizations. Finally, the act includes provisions to prevent the sharing of PII that is not relevant to cyber security. Organizations should certainly heed CISA’s guidelines and enjoy its assurances of protection against liability for sharing, but most probably desire additional legal and privacy controls.
Data-Centric Security is another approach to addressing legal and privacy concerns. It allows organizations to control shared intelligence even after it has left the boundaries of their own network infrastructure, focusing on securing the data itself rather than the networks, servers, and applications around it. Some approaches to data-centric security provide post-distribution access controls as well as the ability to audit provenance, requests, access, and denials, allowing organizations to manage use of their intelligence even after it has been shared. This approach keeps control in the hands of the intelligence creator and, when compared to regulatory assurances, is more likely to mitigate organizations’ legal and privacy concerns.
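As an illustration, the post-distribution controls described above can be sketched as a wrapper that pairs a payload with a creator-defined policy and records every access attempt. This is a minimal sketch with hypothetical class and field names; a real data-centric security product would enforce the policy cryptographically (for example, via a key-release service) rather than in-process.

```python
import time
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class SharingPolicy:
    """Creator-defined restrictions (illustrative fields)."""
    allowed_orgs: set       # organizations permitted to read the payload
    expires_at: float       # Unix timestamp after which access is denied


@dataclass
class ProtectedIntel:
    """Intelligence packaged with its policy and a provenance audit trail."""
    payload: str
    policy: SharingPolicy
    audit_log: list = field(default_factory=list)

    def access(self, org: str, now: Optional[float] = None) -> Optional[str]:
        now = time.time() if now is None else now
        allowed = org in self.policy.allowed_orgs and now < self.policy.expires_at
        # Every request, granted or denied, is logged so the creator can
        # audit provenance, requests, access, and denials after sharing.
        self.audit_log.append((org, now, "granted" if allowed else "denied"))
        return self.payload if allowed else None
```

Here the creator retains visibility: the audit log shows who requested the data and whether the policy allowed it, even after the package has left the creator’s network.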
Compensation for Value Creation
Intelligence-sharing communities essentially serve as a forum for value exchange. Many of today’s communities are structured around voluntary contributions from which other members extract value. This model seems to serve altruism-driven communities better than those driven by protecting collective interests, such as industry regulation or customer security.
Creating mechanisms to monitor both transfers and use of intelligence in sharing communities is a prerequisite to building alternative value creation compensation structures. Fortunately, Data-Centric Security provides a capability to monitor intelligence transfer and use; however, it falls short of providing a mechanism for compensation.
In the emerging fields of cryptocurrency and blockchain technology, Smart Contracts provide a self-executing and self-enforcing way to transfer value with low transaction costs. Integrating Smart Contracts with Data-Centric Security effectively establishes a way to automatically compensate intelligence creators when their contributions are leveraged by other community members.
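To make the idea concrete, the pay-per-use logic a smart contract would enforce can be modeled in a few lines. This is a sketch under stated assumptions: the class name and the flat per-use fee are illustrative, and a production version would run as an actual contract on a blockchain platform rather than as ordinary application code.

```python
class IntelUsageContract:
    """Toy model of a self-executing payment rule: each recorded use of a
    shared intelligence item moves a fixed fee from the consumer's escrowed
    balance to the creator. Names and the flat-fee scheme are illustrative."""

    def __init__(self, creator: str, fee: int):
        self.creator = creator
        self.fee = fee
        self.balances = {creator: 0}   # escrowed funds per organization

    def deposit(self, org: str, amount: int) -> None:
        self.balances[org] = self.balances.get(org, 0) + amount

    def record_use(self, consumer: str) -> None:
        # Self-enforcing: a use is honored only if the fee can be paid,
        # so the creator is compensated automatically at the moment of use.
        if self.balances.get(consumer, 0) < self.fee:
            raise ValueError("insufficient escrowed balance")
        self.balances[consumer] -= self.fee
        self.balances[self.creator] += self.fee
```

Paired with data-centric security’s usage monitoring, each audited access event could trigger a recorded use, turning intelligence consumption directly into compensation for the creator.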
Fostering trust is essential in intelligence-sharing communities. If other community members can’t be trusted to manage intelligence according to the creator’s desires, sharing won’t occur. Today, it’s possible to build communities with embedded social reputation mechanisms, such as feedback ratings pioneered by eBay. Unfortunately, few intelligence-sharing communities have integrated these into their exchanges. In the past, it has been difficult to directly tie the use of intelligence to feedback. However, the use of Data-Centric Security and Smart Contracts presents an excellent opportunity to integrate reputation ratings into sharing community workflows for the transfer and use of intelligence.
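A minimal sketch of such a reputation mechanism, with hypothetical names, might tie each rating to a completed intelligence exchange and report a sharer’s average score:

```python
from collections import defaultdict
from typing import Optional


class ReputationLedger:
    """eBay-style feedback for intelligence sharers (illustrative only).
    Consumers rate each item they used; a sharer's reputation is the mean
    of those ratings, giving prospective partners a trust signal."""

    def __init__(self) -> None:
        self.ratings = defaultdict(list)   # sharer -> list of 1-5 scores

    def rate(self, sharer: str, score: int) -> None:
        if not 1 <= score <= 5:
            raise ValueError("score must be between 1 and 5")
        self.ratings[sharer].append(score)

    def reputation(self, sharer: str) -> Optional[float]:
        scores = self.ratings.get(sharer)
        return sum(scores) / len(scores) if scores else None
```

Because data-centric security logs who actually used which item, ratings could be restricted to consumers with a recorded use, preventing feedback from accounts that never exchanged intelligence.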
Some communities may contain subsets of members with inherently insurmountable trust conflicts, so designing access controls that allow intelligence creators to define their own trusted audience subsets is crucial. Without such granular trust management capabilities, sharing in a community means sharing to all members; thus, all members must be trusted for the optimal sharing of intelligence.
Organizations can reap benefits from preparing their cyber threat management processes for intelligence sharing even before joining sharing communities. Fostering automation, scalability, and continuous-improvement feedback loops that drive organizational learning plays an essential role in preparing to integrate into intelligence-sharing efforts.
Technologies such as Data-Centric Security, Smart Contracts, and reputation ratings hold promise for overcoming many cyber threat intelligence-sharing communities’ challenges. So do community member incentives that address the key behavioral influencers: legal, privacy, trust, and compensation for value creation.
We believe that changes in organizational capacity for integration and automated scaling, along with improvements in community engagement, will encourage organizations to embrace a cultural shift toward more cyber threat intelligence sharing. As a result, we will collectively influence the economics of cyber threat intelligence sharing to “stack the deck” in favor of defenders, rather than attackers.