Cybersecurity expert Theresa Payton says don’t wait until you’re in the middle of a breach to formulate plans. Practice digital disasters. Develop a playbook so everybody knows their roles. Line up your external helpers. Devise your communication strategy. Then, if you’re breached, cooler heads will prevail.

In June 2015, the U.S. Office of Personnel Management — the federal agency that manages the government’s civilian workforce — reported that hackers had stolen personally identifiable information (PII) of 21.5 million individuals, including Social Security numbers from its background investigation databases.1 According to The Washington Post, the Chinese government was responsible for the OPM breach.2

“Organizations are doing a horrible job at cybersecurity and privacy protections,” says cybersecurity expert Theresa Payton. “The security company RSA released a Cybersecurity Poverty Index that stated that 72% of large enterprises, which are the ones with the security budgets and resources, are unprepared for all aspects of a data breach, including identifying the scope, recovery and notification.”3

Payton, a former chief information officer during the second Bush Administration, was a keynote speaker at the 30th Annual ACFE Global Fraud Conference in June. “We’ll never be able to build security that will stop all bad things from happening because we’ll always have bad people in the world,” she says during a recent Fraud Magazine interview. “It would be a great day if cybercriminals hit a brick wall in trying to hack into a company and they said, ‘Wow, this is so hard. Maybe I should go be a good person now and bake pies for the sick and the elderly.’ They won’t do that. They’ll either move on to the next victim and hope they’re more vulnerable, or they’ll find another way to attack. The adversaries are engaging newer technologies such as artificial intelligence, machine learning and big data to step up their own evasion capabilities,” says Payton, the founder, president and CEO of a cybersecurity consulting firm.

“For example, we’re now seeing malware that’s designed to evade most of today’s detection techniques. We see attacks hiding in encrypted communications and traffic.

“There’s also a new playbook that Russia perfected during the 2016 U.S. presidential election, which nation states and cybercriminal groups are using. That playbook is the evolution of hacking social sentiments and using misinformation campaigns. Russia used it to create public unrest around the globe and provoke arguments on both sides of issues. In the future, cybercriminals could use it to defame individuals, industries and organizations.

“Cybercriminals change tactics daily. Most of them are dynamic and evolve their tradecraft of tools, tactics and procedures — TTPs — often to develop new attack methods and to attempt to remain undetected,” Payton says. “Additionally, new technologies introduced daily into the workplace add to the potential attack surface. Cybercriminals use an ‘all of the above’ strategy. They deploy social-engineering emails with poisoned links and attachments. They surf password databases and retry them on company emails and networks. They take advantage of software flaws to insert themselves into technology processes.”

The situation might be dire but not hopeless, Payton says. “We have to continue to up our game, too. Defenders must leverage emerging technologies to architect better, more effective security strategies while also removing burden and friction for the user.”

FM: In many ways, we seem to be talking about the same cybersecurity problems that have existed for decades, such as user IDs and passwords, failure to patch systems, lack of reliable backups and more. But now in a much more sophisticated and complicated technology environment, what needs to change to make our cybersecurity more effective? What are some common denominators of successful, innovative cybersecurity programs?

Payton: We do talk about the same cybersecurity problems, and it’s maddening, isn’t it? We keep focusing on the user being the weakest link, and that’s the absolute wrong way to design an effective cybersecurity program. The threat landscape evolves both as we incorporate new technologies into our transactions and our interactions with the internet, and as we find new ways to stop the adversary.

A cybersecurity program should have the basics, such as benchmarking against the industry best practices provided by the National Institute of Standards and Technology [NIST], but it also needs a focus on these questions: 1) What are your top three most critical assets that, if they were destroyed, held for ransom or leaked, would cause major disruption to your business? 2) Who touches or accesses those assets and what safeguards are in place to make sure the assets are safe? 3) What creative solutions are in place that can outthink and outmaneuver the adversary? 4) Have we designed a multi-layered strategy that incorporates a wide variety of solutions that can stop or detect and recover from an issue?

The layers could be strict user-authorization controls for transactions outside the norm, user-access controls appropriate to data sensitivity, regular and emergency patching, a red-team exercise — bringing in an independent team to challenge your applications — application whitelisting — telling your network the list of what’s “trusted” — and behavioral-based analytics that look at logins, access, and data ingress and egress points.
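As a minimal sketch of the behavioral-based analytics layer described above, the snippet below builds a per-user baseline of login hours and flags logins that fall outside it. The user names, the hour-of-day baseline and the one-hour tolerance are all invented for illustration; real systems model far richer signals (access patterns, data egress volumes and more).

```python
from collections import defaultdict

def build_baseline(login_events):
    """login_events: list of (user, hour) tuples from historical logs."""
    baseline = defaultdict(set)
    for user, hour in login_events:
        baseline[user].add(hour)
    return baseline

def flag_anomalies(baseline, new_events, tolerance=1):
    """Flag logins more than `tolerance` hours outside the user's norm,
    and any login by a user with no baseline at all."""
    flagged = []
    for user, hour in new_events:
        usual = baseline.get(user, set())
        if not usual or min(abs(hour - h) for h in usual) > tolerance:
            flagged.append((user, hour))
    return flagged

# Hypothetical data: alice normally logs in mid-morning, bob mid-afternoon.
history = [("alice", 9), ("alice", 10), ("bob", 14)]
incoming = [("alice", 9), ("alice", 3), ("carol", 23)]
print(flag_anomalies(build_baseline(history), incoming))
# -> [('alice', 3), ('carol', 23)]  (3 a.m. login and an unknown user)
```

The same pattern extends to any behavioral signal: establish a norm from history, then alert on deviations rather than trying to enumerate every bad behavior in advance.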

FM: Can you give a brief list of mandatory responses when organizations find themselves in the middle of breaches?

Payton: When you’re in the middle of a breach, what you do will define your reputation for years to come. The best approach is to practice a digital disaster before you have a breach. Develop a playbook so that everyone knows their roles; the exercise will also highlight any gaps in your cybersecurity plans. Determine what role you want external helpers to play for incident response, forensics, legal advice and cybersecurity fixes. Hopefully, you’ll never need the playbook, but if you do, it should include your communication plan.

A communication plan needs to focus on when you need to disclose publicly to your employees, board, third-party vendors, regulators and — most importantly — your customers. Make sure you know how you’ll communicate. For example, for employees, will it be an internal web portal or a phone blast? For customers, will you do a notice on your website, social media or do direct mailers? Know who your friendly press outlets might be. Have the phone number handy for your local FBI cyber unit.

Here’s what your playbook should emphasize:

  1. Assemble: Now’s the time to pull together the trained incident response team. Incident response requires co-locating the team, even if only via video conference. Discuss the disaster, review possible remedies, and decide how the group will receive alerts and how the escalation and approval processes for ongoing internal and external communications will work.
  2. Transparency: Both internally and externally, communicate what you know, what you don’t know and when future updates will occur.
  3. Dig deep into details: Provide details when and where possible. The where and when might depend upon forensics and ongoing case work. Explain how you plan to avoid similar issues in the future.
  4. Feedback loop: Design and implement an incident response [IR] feedback loop. Tell customers how best to reach you — possibly via a toll-free number or social media. Walk them through the best way to be heard.

FM: Employees, of course, aren’t automatons. They can be thoughtful, emotional, studious, lazy, loyal, cunning, brilliant and dumb — sometimes all in one day. How do fraud examiners and cybersecurity personnel treat employees as multi-faceted humans when protecting organizations? You’ve said that many organizations’ cybersecurity briefings sound similar: 1) The sky is falling, 2) Our data is at risk, 3) Attacks are worsening and 4) Users are still the problem. How do organizations get beyond these suppositions?

Payton: Have some fun with education. To most employees, cybersecurity training is so boring. They snooze and you lose! Focus on the digital assets and operations that matter most to the organization. Talk to your employees about what it would mean if those digital assets were under attack and stolen. Then talk about tips to safeguard those assets.

Consider instituting contests, such as “The person who forwards the most spam messages to our organization spam alert account wins a free lunch this month.” Awareness and education shouldn’t be one time a year and solely focused on compliance. Consider a curriculum that leverages many mediums throughout the year. Keep most of the messaging short and focused on one topic for easier retention.

Mediums you can use include in-person delivery, video conferences, posters and internal memos. You can train employees on: 1) your most important assets and how to safeguard them, 2) how to protect themselves and their loved ones from online fraud (transferable skills that will also help protect the organization), 3) local and federal obligations if you have a breach and 4) the reminder that the security team is small but mighty, so security is everyone’s job. Then train, explain, test and repeat!

FM: What else can organizations do to train their employees?

Payton: Cybercriminals can hack your employees’ devices, whether they belong to the individuals or the organization. These devices are open platforms so that users can install operating-system updates and security patches, but that same openness makes them eminently hackable. The latest apps on those devices can spell trouble for your organization.

What are we doing wrong with these devices? We give employees very arcane rules to follow, such as devising 12-character passwords of non-repeating numbers or letters. The complicated passwords actually feel designed to keep the owners of the accounts out, because they can’t remember them, while the bad guy with a password-cracking tool can still get in. We must work harder to design security systems that act as enablers and safety nets rather than as restrictive rules that keep people from gaining the access they need to do their jobs.
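Back-of-the-envelope arithmetic supports the point about arcane composition rules: password entropy grows linearly with length, so a memorable multi-word passphrase can exceed a 12-character jumble. The pool sizes below (94 printable ASCII symbols, a 7,776-word diceware-style list) are illustrative assumptions, not a prescription.

```python
import math

def entropy_bits(pool_size, length):
    """Bits of entropy for `length` symbols drawn uniformly from a pool."""
    return length * math.log2(pool_size)

# 12 random characters from a 94-symbol printable-ASCII pool
complex_pw = entropy_bits(94, 12)     # ~78.7 bits, nearly impossible to remember
# 5 random words from a 7,776-word diceware-style list
passphrase = entropy_bits(7776, 5)    # ~64.6 bits
# 7 random words: easier to remember than 12 random symbols, yet stronger
long_phrase = entropy_bits(7776, 7)   # ~90.5 bits

print(round(complex_pw, 1), round(passphrase, 1), round(long_phrase, 1))
```

The calculation assumes truly random selection in both cases; human-chosen passwords are far weaker than either figure, which is another argument for generated passphrases plus a password manager.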

According to a recent Experian report, “Managing Insider Risk Through Training and Culture Report,” security and privacy training professionals stated that just over 65% of their organizations’ employees are still the “weakest link” in cybersecurity. I realized that wasn’t really true when I worked at the White House.

The pivotal moment for me was when I shifted the design of a security strategy. We knew we had to address the hearts and minds of the staff at 1600 Pennsylvania if we wanted to protect their privacy and security. After all, if solving cybersecurity and privacy issues were as simple as following security best practices, we’d all be safe. It's not that simple.

Two key questions came to me in my first 90 days at the White House, and I had to answer them or we would have had a major calamity: 1) Why, in spite of talented security teams and investments in security, do breaches still happen? 2) Why is it that, despite hours and hours of boring computer-based training and security campaigns, we still make mistakes and click on links?

We need a new line of thinking. It’s the one that kills boring computer-based training courses. It’s the path that bans buying the latest fancy security tool if you haven’t used innovation to combat the human element. The new path is driven by a core principle: “design for the human.”

I compare this line of thinking to installing childproof or safety items in your house for toddlers or pets. You still tell them, “don’t touch this,” but just in case they do, you’ve designed safety into your house with them in mind.

Design for your employees and for yourself. Just know they’ll use free Wi-Fi, they’ll recycle passwords and they’ll respond to emails that trick them into giving up information. They’ll break all the security rules because they aren’t security employees. By the way, I am going to let you in on a little secret: Security employees break the rules, too!

FM: Can you explain what you mean about conducting a “cyber walkabout”?

Payton: I often walk around a client’s building and ask employees simple questions such as, “How are we doing supporting you? Is there anything we ask you to do, in the name of security, that gets in the way of you doing your job?” Don’t try to fix things right away. Just listen to their answers. You’ll learn so much about the issues the organization has.

FM: How do organizations bring in ethical hackers to test systems?

Payton: Before you bring in an ethical hacker, ask yourself, “If I were to ask someone to act like a cybercriminal, what’s my biggest fear of what would happen?” Pick a reputable firm that will ask you what you’re trying to accomplish. Discuss your top three most-critical assets with them and develop an incident response playbook. Make sure you have very crisp rules of engagement and expected outcomes. It should also be a coaching and mentoring exercise for your team. If it’s going to be a “Wizard of Oz” exercise where you’re not allowed to peek behind the curtain, it’s not going to be the best use of your time.

FM: Our members — fraud examiners — work hard to prevent and deter fraud in all sectors. What’s the best advice you can give them as they fight daily in the trenches?

Payton: First off, remember that you picked a very noble profession! You’re helping fight fraud and reduce crime. Be a student of your job and keep up on the latest advancements that you can deploy to help improve your organization’s anti-fraud posture.

Your job is fascinating and interesting. Don’t forget to use storytelling to help your customers and clients know why you implement certain rules or controls and how these rules and controls protect them.

FM: Dr. Joseph T. Wells, CFE, CPA, began the ACFE in 1988 to preach the message of detection, deterrence and prevention of fraud. How do you see the association doing in its cybersecurity deterrence and prevention efforts? What can the ACFE do to further help its members in cybersecurity?

Payton: I love the work that the ACFE has done for decades now in the detection, deterrence and prevention of fraud. I also love seeing the ACFE focus on the intersection of fraud-fighting discussions with cybercrime-fighting discussions. It would be great to see the ACFE push the industry to look for just-in-time methods to push fraud indicators to the cybercrime-fighting side and vice versa. Additionally, how can we create new models for fighting fraud when interactions are now machine-to-machine and how can we more quickly detect a rogue machine committing fraudulent transactions? Actionable threat intelligence in real time is key to winning the cybercrime war.

FM: Should senior management of government agencies and private organizations be held accountable and suffer consequences when they don’t adequately secure PII?

Payton: Whether it’s the OPM breach or another department or agency, the database of information stolen to date from government organizations is staggering. As fraud fighters, ACFE members should be concerned about the stolen data elements that don’t really change and can be used within fraud schemes for several years to come. I can’t get a new Social Security number easily nor can I suddenly have a new residential history, mother’s maiden name, new date of birth or new handprints.

Following the OPM data breach, the government began a renewed flurry of activity to raise the bar on cybersecurity postures. As with anything of a massive undertaking, there are both progress improvements and areas not moving fast enough. The U.S. Government Accountability Office conducts a review regularly and you can see the scorecards of departments and agencies.

Obviously, more needs to be done. Bad things do happen, and although we want someone to be held accountable, I know that U.S. federal departments and agencies work hard to protect the data they collect. However, they’re outstaffed by cybercriminals and cyber operatives from nation states. When a governmental breach occurs, the affected department or agency should immediately release an after-action report and hold weekly oversight meetings to ensure they implement mitigating controls.

FM: Do you think governments should impose criminal penalties associated with data breaches when organizations are negligent in protecting sensitive or personal data? Should the U.S. Congress enact harsher penalties for data breaches involving personally identifiable information?

Payton: I’d like to see the U.S. enact a research and development tax credit for any company that spends money on cybersecurity. Of course, spending $1 on cybersecurity is $1 you can’t spend on PR, marketing, hiring, etc. to grow your business. But we’d make significant progress in our private sector cybersecurity posture if we incented businesses to invest. Most businesses still don’t think they’ll be targets, so disincentives are going to be less successful.

FM: The Center for Cyber Safety and Education predicts that by 2022, 1.8 million security positions could go unfilled. How will we be able to satisfy the demand for all of these professionals who currently don’t seem to exist? How can government agencies afford to compete with the private sector for cybersecurity and digital forensics talent?

Payton: Recruiting at the White House was paramount to our success. Finding colleagues who share your passion, your drive and your skills is tricky but not impossible. Chief information officers play a huge role in shaping the makeup of their teams. I often tell other CIOs and C-suite execs that they need to stop chasing the same résumés, the same degrees from the same colleges and the same certificates. Many of the best tech employees don’t have traditional backgrounds. Consider re-training and re-tooling insatiable problem solvers. Hiring outside the mold is crucial to innovation and success.

We can also leverage artificial intelligence and machine learning to automate some of the entry level tasks to help supplement the teams that are stretched thin. I can’t wire a human to have that compass and to have that desire to passionately protect and defend. If you see that in somebody, you can train them in cybersecurity.

Whether you hire for the government or the private sector, consider tapping into our U.S. military service members who might be leaving the service and need a job. They often still want to fight the good fight. Go to the bases where they’re detaching from the U.S. military to discover how you can retrain them. Same thing with law enforcement. Whether they’ve spent a whole career or a couple of years, they might have the correct wiring.

FM: For years, cybercriminals targeted financial institutions. Now they seem to have moved on to health care. Why is that? How are health care organizations ramping up their prevention capabilities?

Payton: Criminals go where the action is, where victims are easy to find and where there’s value. They focused heavily on the financial services industry for decades. As the financial services industry hardened its defenses, cybercriminals looked for new targets and hit pay dirt by targeting food, retail and health care.

When WannaCry hit, people heading into U.K. hospitals who were scheduled for surgery were told, “We can’t operate today because we’re in the middle of a ransomware attack.” This is where cybercrime gets real. A ransomware attack could endanger lives. Yet security protocols can’t stand in the way of patient care. Health care institutions once used old-school rules; you could develop your firewall and hide the data behind it. The devices that are part of the “internet of everything” [IoE] or the “internet of things” [IoT] have broken that rule. [See sidebar: “Three tips for fighting fraud and cybercrime at health care organizations.”]

FM: What kinds of questions should our members be asking their organizations about “kill switches”?

Payton: Do you want to be aggressive or not? In emergencies you might need to flip the kill switch and turn off the data. It will disrupt operations, but the consequences are better than finding out that leaving it on creates a much bigger problem. Within your incident playbook, you have to devise how much you should turn off, what functionality you want to retain, under what circumstances you’d flip the switch and who will flip it.
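The playbook decisions listed above (how much to turn off, what functionality to keep, who may flip the switch) can be captured in a small sketch. All names, roles and scope values here are hypothetical, assuming a simple role-based authorization model rather than any particular product.

```python
from dataclasses import dataclass, field

# Hypothetical roles allowed to flip the switch; a real playbook
# would name specific people and an approval chain.
AUTHORIZED_ROLES = {"ciso", "incident-commander"}

@dataclass
class KillSwitch:
    # Scope controls "how much you turn off": nothing, only
    # external-facing data flows, or everything.
    scope: str = "none"            # "none" | "external" | "all"
    flipped_by: str = field(default="", repr=False)

    def flip(self, operator_role, scope):
        """Only pre-authorized roles may flip the switch."""
        if operator_role not in AUTHORIZED_ROLES:
            raise PermissionError(f"{operator_role} may not flip the kill switch")
        self.scope = scope
        self.flipped_by = operator_role

    def allows(self, channel):
        """Decide whether a data channel stays up under the current scope."""
        if self.scope == "all":
            return False
        if self.scope == "external":
            return channel == "internal"
        return True

switch = KillSwitch()
switch.flip("ciso", "external")          # cut external flows only
print(switch.allows("internal"), switch.allows("partner-api"))
# -> True False
```

Writing the switch down as code forces the team to answer the playbook questions in advance: the scope values, the authorized roles and the channels that must survive a partial shutdown are all decisions made before the emergency, not during it.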

FM: What are some practical ways that employees can protect organizational security when they’re traveling for business or pleasure?

Payton: Give employees a travel briefing when they go overseas so they understand the physical and digital threats. Update their devices, including operating systems, applications and browsers. Install your company VPN on all devices. Make sure they travel with a hot spot for connecting to the network so they don’t have to use free Wi-Fi. Tell them they can’t have any business documents, email systems or apps on their personal devices. Instruct them not to leave computers in locked cars, hotel rooms, etc. Nothing is truly locked. Instruct them to have the mindset that they’ll be targets.

FM: What are your thoughts about the “right to be forgotten” clause in the European Union’s General Data Protection Regulation?

Payton: The EU’s “right to be forgotten” sounds nice in theory, but it’s nearly impossible to implement because of the internet’s ability to spread information globally and instantly, and to store it indefinitely. The clause sets an interesting precedent, not just for the EU’s member countries but for citizens around the world. It’s too early to know the long-term impact on technology companies of the EU’s decision to enforce the “right to be forgotten.” However, it’s a safe bet the law will evolve and not disappear. There are concerns that giving individuals or organizations more control of their internet identities could lead to countries and firms censoring the internet. Free-speech advocates around the globe worry that the lack of court precedent and the gray areas of the EU law could pressure all tech companies to remove results across the globe and to delink news stories and other information upon an individual’s request.

A quick history lesson on how this law came about: According to a European Commission fact sheet, a Spanish citizen filed a complaint with Spain’s Data Protection Agency, indicating that Google Spain and Google Inc. had violated his privacy rights by posting an auction notice that his home had been repossessed. The matter had been resolved years earlier, but since “delete is never really delete” and “the internet never forgets,” the personal data about his financial matters haunted his reputation online.

He requested that Google Spain and Google Inc. be required to remove the old news so it wouldn’t show up in search engine results. The Spanish court system reviewed the case and referred it to the European Union's Court of Justice.

According to the European Commission fact sheet, the EU court said in May 2014, that "individuals have the right - under certain conditions - to ask search engines to remove links with personal information about them. This applies where the information is inaccurate, inadequate, irrelevant or excessive for the purposes of the data processing. … A case-by-case assessment is needed considering the type of information in question, its sensitivity for the individual's private life and the interest of the public in having access to that information. The role the person requesting the deletion plays in public life might also be relevant."

In the U.S., implementing a federal law might be tempting, and I often get asked on the Hill if we should implement one. The challenge is that complying with such a law would be complex and expensive. The next startup could be crushed under compliance costs, and innovation could die before startups even get launched.

However, we do need a central place of advocacy and a form of a consumer privacy bill of rights. We have remedies to address issues, but it's a complex web of laws that apply to the internet. Technology changes society faster than the law can react, so U.S. laws relating to the internet will always lag behind.

The Better Business Bureau helps us with bad business experiences. We have the U.S. Federal Trade Commission and the U.S. Federal Communications Commission to assist us. Individuals need an advocacy group they can appeal to for assistance in navigating online defamation and reputational risk, and for an opportunity to scrub their online personas.

I feel that it’s impossible to put the EU law into practice as it’s written, but I’d love a way for anyone to truly hit that permanent erase button to wipe out all records of a user's internet-based activity. This could clear up lots of social, emotional, criminal and business concerns that affect many of us today.

FM: What are some of the best ways for organizations to protect their IoT devices?

Payton: The IoT and the IoE are upon us. Businesses and the security industry were struggling to beat cybercriminals before these technologies were introduced, and they just add more new points of presence to worry about. We love the convenience, but the data the new technology creates and the complexity it adds to the workplace are creating a domino effect of security and privacy issues.

This new technology doesn’t follow old-school rules. Now is the time to break all the security rules and try a different approach. The biggest risks aren’t so obvious. They hide in the least obvious places. Like in “smart” light bulbs installed for energy efficiency or sensors installed under the chairs at the workplace!

The chatty IoT devices talk to each other, creating treasure troves of data every millisecond to save money and to improve safety, service and well-being, and all that new data needs protection. Those chatty little sensors seem innocent enough. What harm could a seat or light sensor do? But if you’re not careful in design, the IoT could be the new unlocked door spilling the keys to your secret stash of what you believe to be secured digital assets.

FM: After graduate school, what attracted you to the cybersecurity field?

Payton: I was fortunate to begin my career at Barnett Bank — now part of Bank of America — on the cutting edge of technology and customer delivery, which allowed us to also be on the cutting edge of the methods of fraudsters, money launderers and more. I then became a senior vice president at Wachovia Bank in Charlotte, North Carolina. After 9/11, U.S. agencies came knocking on the bank’s door asking for help finding terrorist financing that could be hiding among legitimate transactions. That was a life-changing moment for me. Having been raised in a military family, married to a Naval Academy graduate and then asked to help law enforcement with something so monumental was humbling. The on-the-job investigative experience helped formulate my desire to focus the rest of my career on protecting nations, citizens and businesses, and on combating fraud, cyberterrorism and cybercrime. I realized that cybercriminals will use whatever means necessary to achieve their goals. And I learned how much help companies large and small need to defend against these criminals.

FM: In one of your presentations you tell the tale about how your White House security team used “BlackBerry happy meals” — plastic baggies containing the devices and instructions on how to protect them plus candy and other goodies. Can you explain how this inexpensive method helped improve security and taught you about innovation and creativity?

Payton: What does a happy meal have in common with the White House? Turns out a lot. We realized that staff weren’t reporting missing BlackBerrys fast enough. We conducted an informal study and found they were afraid to first tell our department, the Office of the Chief Information Officer, which floored us. We realized the policy language didn’t spell out how we could help, and our briefings weren’t designed for busy White House humans but for government compliance. So, I asked the security team what we could do to design a brand-new process, training and communications. My team came up with a brilliant idea! They created a BlackBerry happy meal — a clear bag that contained lots of fun things, such as White House pens and pencils, White House lanyards, presidential M&Ms, plus a wallet card with brief, memorable, key security points and a number to call at any time day or night. We gave the kits to a small pilot group at first. The approach was successful. The average time to report a missing BlackBerry went down over time. And those outside of the pilot group called us to get their own happy meals!

FM: What other cyber anecdotes can you tell about your time at The White House?

Payton: Those stories need to go to the grave with me! However, once we changed our mindset to design our security at the White House for the staff, we were more effective. We knew we had to address the complexity of our systems and technology. We also had to win over the hearts and minds of the staff if we wanted to protect their privacy and security. Our security protocols were meaningless if we made them too difficult for people to do their jobs. Of course, everything at the White House was considered critical and sensitive data, but we knew we couldn't protect every asset the same way. Just as the U.S. Secret Service has a clear focus — to physically protect the president and vice president — we followed that same principle in the CIO’s office. Our office’s responsibility was to protect and keep all assets safe. However, with limited time frames and resources, we always had a laser-beam focus on our closely guarded top two most critical assets, which I can’t name here!

FM: How did you become part of the CBS program “Hunted” team? What did you learn from that experience that you use in your firm’s work (or vice versa)?

Payton: Some of my amazing milestones in life are God moments. Something happened that led me down a path I didn’t know existed. “Hunted” was no different.

The show’s premise was that a bunch of everyday people play fugitives on the run. Our Command Center investigators worked with the boots on the ground, the “Hunters.” Team members had backgrounds from the U.S. Marshals, U.S. military, the White House, CIA, FBI and the National Security Agency. Even though it was a contest, we took the man-hunting seriously as if we were tracking child traffickers or deadly criminals. Each fugitive would have their lives compiled into a “target package” and we would mine the profiles of their families, friends, coworkers and their online presences for clues. Everyone, including organizations, leaves cyber footprints. How you interact with the internet determines how public and how large that footprint is.

FM: What’s your motivation in continuing in the cybersecurity field?

Payton: Righting the wrongs of the digital age with our skills and creativity. Protecting our nation and our allies. Protecting companies and helping them recover when bad things happen. Helping individuals get their lives back. Doing pro bono work to end child trafficking and exploitation. I am my brother’s keeper. This is my gift — to defend, protect and help victims seek justice. If not me, then who?

Thanks to ACFE Faculty member, Walter Manning, CFE, president of Techno-Crime Institute, for his input. – ed.

Dick Carozza, CFE, is editor-in-chief of Fraud Magazine. Contact him at dcarozza@ACFE.com.


1 Office of Personnel Management, Cybersecurity Resource Center

2 “The Daily 202: How the nature of cyberwar is changing,” by James Hohmann, The Washington Post, April 15

3 RSA Cybersecurity Poverty Index 2016