Mar 14, 2018 | 2 min read

Podcast #3: Lessons and Insights From 25 Years of IT Security - by David Bauer

 

David Bauer has been working professionally in IT security since the late 1980s. A pioneer in the field, he helped bring Morgan Stanley’s first website online and was instrumental in making Merrill Lynch the first major bank to outsource major security services. Our conversation explored his history as one of the first Chief Information Security Officers on Wall Street, from his experience at Bell Communications Research through Wall Street to his current advisory work. He discusses how regulations help companies improve their security, and shares valuable insights for small and medium-sized businesses.

Resources – NY Times Tech Blog

Corporate security overview - New York State DFS Cyber Insurance Report


Momenta Edge Podcast

Our podcast features expert interviews with leading practitioners and thinkers across Connected Industry and the technology landscape. 

When it comes to tech, the news cycle moves fast, as we are faced with new issues, research, and developments daily. Many people struggle to find time to read the news, and here at Momenta we're no different. We're lucky at Momenta, as we interact every day with thought leaders across the different IIoT verticals. So we've decided to produce our own podcast that explores IIoT and Connected Industry with input from the newsmakers.

 

We'll notify you weekly about new podcast episodes, upcoming guests, and news. You can subscribe to the podcast, and if you'd like to be considered to appear on the podcast, contact us.

 

View Transcript

Hello everybody, this is Ed Maguire, Insights Partner at Momenta Partners with our Momenta Podcast, where we interview some of the most interesting and thoughtful people in technology, connected industry, and business. 

Today my guest is David Bauer, who is currently a managing partner at Sandhill Lease. I’ve known Dave for a number of years; when we originally met, he was the Chief Security Officer at Merrill Lynch. Dave brought a rich array of insights and experience to the role and accomplished an enormous amount at a large organization with some rather daunting security challenges. Security for anybody in any business, whether it’s Internet of Things, industrials, or financial services and banking, is almost always top of mind, and David is one of my favorite people, and one of the most understandable and articulate in talking about the problems people face, the challenges that get overcome, and really creative ways of thinking about security as well. So, with that, I’d like to welcome you to the podcast, Dave. 

Thank you very much Ed, it’s nice to be here. 

 

I’d just like to start by having you share a bit of your background: what brought you to security, and a little of the career arc that has brought you to where you are today.  

This morning before the webinar I was thinking back through my career. I started working in information security as long ago as 1987, so it’s been almost 31 years. At that time, I took a position doing research in intrusion detection at Bell Communications Research, which was the R&D arm for the regional Bell operating companies, the local phone companies back then. Most of the literature you could find about information security was in industry bulletins and a lot of research papers. Not long after that, the regional Bell operating companies had a significant number of public intrusions into the network, some of which are documented in books like ‘The Watchman’ or ‘Masters of Deception’, and I was involved in those incidents.  

I learned a lot back then about the nature of computer hackers and those who want to break into systems. That part hasn’t really changed in 31 years: there’s still an enemy and there’s still a target; just the enemies have changed, and the targets have changed. A seminal incident during that time at Bell Communications Research was the Morris worm of 1988, the first well-publicized, self-replicating piece of software, which found its way into a large number of Unix systems connected to the Internet at that time, including Bell Communications Research, phone companies, a lot of research institutions, and so on. That was probably one of the most interesting events in all of computer security, because it raised a number of interesting questions, many of which we grapple with today. The worm self-replicated because of a flaw in some software that ran on those Unix systems. The questions it raised were the following… 

  • Why didn’t the vendors tell us about these flaws? They knew about them, or did they? And if they did, why didn’t they tell us? 
  • Why didn’t we have any way of being alerted quickly about what the issue was and how to fix our systems? 
  • Why didn’t we have anybody responsible for making sure our systems were up to date, the patching problem we still grapple with today?  
  • Why weren’t there any government agencies providing oversight and assistance to industry?  
  • Why didn’t we have standards for our systems, or training for individuals?  
  • Why did everybody have all their computers connected directly to the Internet, without any security boundaries between them? 

All of this was brought to light by one incident, and it prompted a tremendous effort; much of what we see today either has been addressed or still remains a challenge. From my own experience at Bell Communications Research, the regional operating companies asked us to go to the technology vendors and have them come up with a statement about how they dealt with security flaws, and to come up with standards for how they would interconnect their systems more securely. In the work with the government, I placed one of my staff at NIST for a year to create some of the first early standards published for computer security. So, it was a very interesting time, and we see the effects now, and I’m sure you have a bunch of questions about that interesting period! 

 

It’s amazing that just one incident ended up having such major repercussions, but it wasn’t the last either, right? 

It wasn’t the first, but it was the first well-publicized one, and it certainly wasn’t the last. There have been many incidents since then, with varying degrees of impact. After that I took a job as one of the first Chief Information Security Officers on Wall Street, at Morgan Stanley. That was sobering, because I went from a research and development, advisory kind of position, which I think had a huge impact on the industry, to being responsible for the technology security of a major financial services company. 

What struck me most about that position at the time, in 1994, was that, like most financial institutions, we had thousands of distributed computers, mostly Sun machines at that time. The IBM mainframe had very good security, because that had been built into the mainframe operating systems for years, but there were very few tools available for the rest of the environment. In addition to security, I led the team which launched Morgan Stanley’s very first Internet website, and we were quite concerned about its security. It’s not like the websites you see today with online trading and all of that, but still, if we had had a security incident, that would have been a reflection not just on Morgan Stanley, but I think on the industry as a whole using tools like the Internet to disseminate information. This was in 1995; there were very few corporate websites available.  

I was explaining to somebody just a week ago, ‘You don’t think about it, but of course there was a time when companies didn’t have websites.’ Now of course you wouldn’t think that, but there was such a time, and somebody had to launch the first ones. We had to essentially build our own firewalls from Intel machines and stripped-down operating systems, write our own firewall rules, write our own protected interpreters, and harden our web servers all on our own, because you couldn’t buy that stuff back then, but we still cared a lot about security. What was interesting about that is that the questions did come from executive management, who had to be educated about the Internet and what the business was trying to do with the launch of the first website. 

But the questions did come from the executive management of the business: does this represent a risk to the firm, and how do we protect against that? I found myself in discussions with senior business people at Morgan Stanley, explaining to them what the risks were and what we were doing to protect the company. That insight was very interesting: even then, people perceived that interconnectivity would bring additional risk, and asked questions about how that would be protected.  

I was in that position for several years. Later I moved to Merrill Lynch, in 2001, as the Chief Information Security and Privacy Officer. The interesting thing about that role was that, in addition to being the Chief Information Security Officer with all that brought with it, I also took on the role of Privacy Officer, which today is typically found in the General Counsel’s office, not associated with Chief Information Security Officers. At that time my team developed our first privacy policy, the notice of privacy practices that we had on the website, a privacy incident response plan, and a way of thinking about privacy as a characteristic a customer would want us to provide: privacy of their information, the ability to tell them what we were doing with their information, and a way to think about privacy. 

That was new as well; most of my peers were not Privacy Officers, or perhaps their companies hadn’t thought about it, but basically Merrill Lynch was a bit ahead there. I was in the role for several years, and one of the notable things which got attention during that time is that we were the first Wall Street company to outsource an aspect of security beyond basic safeguards. I hired VeriSign to manage and monitor our firewalls and all of our intrusion detection systems. My thought process was that VeriSign ran global DNS, so they had a better view of all the traffic going on on the Internet than anybody in the world. I felt that if they were monitoring and managing our external defences, combining that with their view of other activity they were seeing on the Internet, then they had data on various malware attacks and other threats, and they were able to see that before anybody else, because they could see the DNS traffic and the Internet traffic. 

It was so novel at the time that it made the front page of the Wall Street Journal, and some of my peers called up and said, ‘How could you do that? You need to keep it in-house.’ I said, ‘I have no visibility in-house; all the visibility about what’s going on is outside of me, and I need that intelligence to help me make better security decisions and better protect my network.’ That has since become pretty common: many companies hire security operations centers and other external parties to monitor for them, because they can’t possibly see everything going on, or really anything going on. They need that kind of data and intelligence to properly protect themselves. That was quite interesting. Now I consult, typically with early-stage or start-up companies, to help them with their security. So, that’s the history of my career. 

 

It’s been pretty exciting, and you and I got to know each other when you were involved in doing that first outsourcing of security at Merrill. I was struck by one of your comments about putting the first Morgan Stanley website online, and the fact that you had to speak with executives in order to get them to trust that it would be safe to put this information online. That’s very similar to a lot of the conversations we’re having today with companies that are looking to connect their physical assets and gain visibility into physical processes they never had, which of course creates and exposes a lot of data. What were some of the objections, and how were you able to convince executives to trust these systems that they had not seen before? 

That’s a good question. What’s really interesting about that particular case is that the desire to use the Internet as a medium of communication was purely a business-driven decision. Stephen Roach, whom you will long remember from Equity Research, wanted to publish equity research reports on the Internet; he said this was the way of the future. As you know, that’s how it’s done now, very little paper gets published, so we had a business demand. The executives wanted to understand what the reach of the Internet was, how this could be presented, and then what the risks of doing it were. They were interesting business risks: ‘How do we control access if we wanted to?’ Some of the equity research was distributed to the public, so there were questions about whether doing that risked diluting the brand. Then there were questions like, ‘What if it was compromised? What if our website was defaced or hacked in some way? How could we trust that we would be protected?’ So, the way we convinced them was by going through some detail on the kinds of threats, very similar to what you do now: ‘Here are the kinds of threats that occur…’ Someone could change the data. Someone could upload a program. Somebody could get through our defences and find their way inside the core of Morgan Stanley.  

What one would do now is outline, ‘Here are the threats, here are the countermeasures we would take, here’s the care we would take to manage and monitor this system over time, and here’s how we would be alerted if there were incidents,’ and things like that. So, we built a multi-layer defence, but explained it in terms of the kinds of incidents that occur, so they could understand it, because they didn’t understand the concept of a firewall or file monitoring; that’s a technical solution to a problem.  

We talked about the risks that were prevalent and what we were doing to mitigate them. It was a learning experience, explaining in a number of ways to non-technical audiences what the risks were. Of course, we also had to educate them on what the Internet was and what interconnectivity meant. People didn’t really use the Internet at that point for commerce, or access to bank accounts, or to buy things, so it was in itself an unknown quantity: what exactly it was, why people would use it, why it was an effective mechanism, and what it meant to be interconnected. The interesting thing is that Morgan Stanley had of course had an Internet connection for years, but that was a technical choice, so that technologists could get software, read technical journals, and things like that. It was not seen so much as a business tool, so this was the first time it was brought up as a business tool, and there had to be an explanation of what that was like. 

I remember a conference Mary Meeker invited me to, probably in 1996, where I talked about security and the Internet to a bunch of investors, and a lot of the questions were the same: ‘Why is this going to make money? Where should we invest?’ I talked about security and some of the future things we would see, both challenges and opportunities, that the Internet was bringing. 

 

It’s amazing how so many of these same issues and questions just recur again and again. How about the concept of trust? It’s a recurring theme that in order to go online, or in order to adopt a new technology, there needs to be a level of trust. I guess what I’m hearing from you is, you’re making an argument around… 

Yes, in a couple of areas. First, the concept of trust: interestingly enough, the first well-known published security criteria was called the ‘Trusted Computer System Evaluation Criteria’; the word ‘Trusted’ was the first word. Secondly, Ken Thompson gave one of the best talks ever on receiving the ACM’s highest award, the Turing Award, and the title of his paper, which you can find and download, is ‘Reflections on Trusting Trust’. At that time, it was important to use the word ‘trust’, because you had to make a determination: could I trust this component, could I trust the implementation, and how would I evaluate its trustworthiness?  

I think we could go back to those principles some more and think about the various components we have. Don’t define it as, ‘I’ve implemented this set of technology or these components,’ or even, ‘My staff have certifications,’ or, ‘I’ve been audited and my audit is okay.’ The more fundamental questions are: what are all my components, what are the business processes I’m operating in my environment, and have I defined the trust levels, and the trust between the different components, such that I feel comfortable engaging in that business, with the kind of monitoring that lets me know whether that trust is maintained on a day-to-day basis? One of the most interesting things about the world of security is that trust is very dynamic, so a configuration or method of protection that I’ve implemented today may not be trustworthy tomorrow, for a variety of reasons. I have to be vigilant, keep an eye on it, and make sure the trust is maintained. 

 

I think that’s one of the most interesting comments you’ve made to me, this idea of risk being a dynamic challenge. That’s something that of course makes security really interesting, but it also makes it hugely difficult to manage, and as things get more and more connected, that ends up being a big challenge. 

I’d like to pivot a bit to the role of governance, privacy, and regulations. You mentioned that when you were at Merrill you had the role of Chief Privacy Officer, and this has largely moved into a corporate counsel role, but we have legislation; GDPR comes into effect later this year in Europe. How have you worked with governance and privacy rules while trying to match business demands and business agility against regulatory strictures or constructs that may not necessarily be designed to promote the most agile business processes, or the most flexibility in the business? And then of course you have to deal with budgets, which also may not be that forgiving of all the requirements placed on you. 

Yes. I’ve worked in a lot of aspects of technology, and there’s never enough money for everything you want to do, whether it’s security or privacy or building a trading system, so that’s in line with your point. I believe the privacy regulations and legislation are doing quite a bit to help people think about security, and let me tell you why; you’re probably thinking, ‘That’s an odd statement,’ but I think it’s true. So, what’s the essential element of privacy? It is putting control into the hands, or at least honoring the wishes, of the individuals the data is about. That could be a business, but most privacy regulations are targeted at information about individuals. What privacy legislation does is require companies to follow the wishes of the individuals the data is about for the use of that data, and that’s a very powerful statement.

So, what it means is that the corporation can’t make all the decisions about the uses of the data. One, the individual’s wishes need to be granted; and two, the corporation needs to protect that data as the custodian; they’re bound to protect it. In all the privacy legislation, when you boil it down it’s all very similar: there’s the right of the person to specify how the information will be used, the right to see it, the right to modify it if it’s incorrect, the right to be notified if it’s been breached in some way, and the obligation on the custodian to protect it. GDPR looks like that, the HIPAA privacy rule and breach notification rules look like that, and GLBA in some pieces looks like that. If you look across the world, they all look much the same. 
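The common core Bauer enumerates across GDPR, HIPAA, and GLBA can be sketched as a small checklist. This is only an illustrative model; the field names are not drawn from any statute's text.

```python
# A sketch of the common core across GDPR, HIPAA, and GLBA as Bauer
# describes it. Field names are illustrative assumptions, not legal terms.
from dataclasses import dataclass, fields

@dataclass
class PrivacyRegime:
    subject_controls_use: bool      # person specifies how data may be used
    right_of_access: bool           # right to see the data
    right_of_correction: bool       # right to fix incorrect data
    breach_notification: bool       # right to be told of a breach
    custodian_must_protect: bool    # obligation on the data holder

def shares_common_core(regime: PrivacyRegime) -> bool:
    """True when a regime grants all five elements of the common core."""
    return all(getattr(regime, f.name) for f in fields(regime))

gdpr_like = PrivacyRegime(True, True, True, True, True)
print(shares_common_core(gdpr_like))  # all five elements present
```

The point of the model is Bauer's observation that, boiled down, these regimes look "much the same": a regime missing any one element, say breach notification, would fail the check.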

What’s great about that is that, as a corporation, when you have these regulations to follow, you have an obligation to protect the data, a bunch of security things you need to do. You need to follow the will of the owner of that data, or at least the provider of the data, so you need to build constructs to collect their wishes and follow them. But you can apply that to corporate data too. If you follow the same theme, you say, ‘Hey, business unit that owns this data, I need you to tell me your wishes for how this data is to be used, and as custodian I’m going to protect it and follow your wishes.’ It’s just another set of requirements. People think about the ownership of data, whether it’s business data or private data, as, ‘I want to provide my rules and my wishes for the data, and you custodians have to implement my wishes and protect the data. You have to tell me when you don’t do it very well, and there might be penalties on you if you don’t, particularly if you’re very negligent.’  

HIPAA, for example, has escalating penalties if the custodian of the health data is found to have been negligent in their protection, versus merely incompetent; actually negligent, meaning they knew there was a problem and didn’t do anything about it. I believe that’s a hugely important aspect, and I applaud governments that are taking strong action around the protection of private data. I’m concerned when governments begin not to put the will of the people the data is about as the controlling aspect, because I believe that erodes trust. It erodes the good foundations those controls bring, and the responsibility to protect the data, across the entire population of data that the company stores. 

 

It really is essential to creating a foundation of trustworthiness between organizations, public institutions, individuals, and companies. 

I’ve seen some interesting reactions. For example, in contracts at companies I’m advising, I’m now including indemnity clauses, modelled after HIPAA indemnity clauses, in non-healthcare contexts. For example, I’m advising some financial services companies who hold data about their clients, and that data is being provided to a third party for some useful business reason. I’m now including in those contracts indemnity clauses for things like breach notification, credit monitoring, and any other damages that might occur because that third party has been breached. That’s very new; no one ever thought about that before. Certainly 10 years ago no one really thought, ‘Hey, if I’m going to have a third party doing something with data that I’m entrusted to hold, then they need to take a lot of special care, and face legal indemnity if they don’t.’ Essentially what I’m doing is extending out to those other organizations the responsibility to protect the data, just as I have a responsibility to my customers to protect it. I think we’ll see more of that. 

 

Interesting. We’ve got all of these new sources of data that are going to be connected. In the next several years we’ll have automobile data from autonomous vehicles, and all this data collected by smart cities and smart buildings. How do you see the security challenges evolving around that? And, building on your prior comments on data privacy and governance, are there frameworks that stakeholders in these systems, whether governments or businesses, are going to need to think differently about? 

I do. There are a couple of aspects to that. One is that HIPAA actually has a pretty good framework for the protection of health information, how it needs to be protected, and then how it can be de-identified for use in, say, analytics. When you collect all this data, there are only two things you do with it. One is personalization: a company can’t be a highly digital company without an extreme amount of personalization, because that’s what people want. I’m connected to my retailer, and I want the retailer to know a lot about me, so it knows my size, my shipping preferences, and the things I’ve bought, because it’s great when it offers me things tailored to my choices. It has a lot of personal information about me in order to do that, but I want to be able to know how the information is being used and to set my privacy preferences for it, and the HIPAA regulations are quite good at specifying obligations there. 

The second piece is how the data might be used in aggregate: you de-identify it in some way, and then companies use it to create general rules about people in my socioeconomic class, which can eventually be funnelled back into better personalization. That’s okay too; it just needs to be done the right way. Interestingly enough, I see a broad spectrum in how companies think about that. I see it, for example, when I talk to various information technology suppliers that some of the companies I advise are engaging. Some are quite good; you talk to them and they understand the privacy needs, the de-identification, and the security needs. Others, oddly enough, don’t have a clue; they’ll say, ‘Oh yeah, all the data you give us, we give to whoever wants it in our company. We’ve kind of thought about security, but we have all of our software over at Amazon and they’re secure, so that’s all we need to do.’ That’s a completely clueless approach to both privacy and security. 

Others are at the opposite end of the spectrum. What surprises me is how divergent otherwise similar technology suppliers are in their attitudes toward this, given all the publicity and all the incidents, and everything we know about how people want control: they want to use these services, but they definitely want them secured and their preferences about the use of data respected. It’s like a bell curve: there’s a bunch in the middle, some at the top that are really good, and then a bunch down at the bottom that haven’t read the papers or something, that haven’t quite figured out why they have the attitude they do. And so, in the trust concept we’ve been talking about, those at the lower end of the bell curve are not trustworthy.  

They’re not trustworthy because they don’t understand the ramifications of the service they’re providing, the data they’re stewarding, and the implications of how that data might be used, either by their staff or if they were compromised. There are others at the other end of the bell curve that are quite trustworthy, given their understanding of the issues, how they’re thinking about them, and what they’re doing both to protect the data and to use it in the way the customers wish it to be used. 

 

Say you’re a company looking to expand the devices and systems that you have connected; say you have a fleet of trucks generating data via GPS, or you’re doing cold chain, tracking shipping, cargo, and logistics, and you’re talking to different providers. If you’re running a warehouse and you don’t know anything about this stuff, what would be some of the questions, or the ways to think about how to even ask the right questions, to figure out whether a provider is on the correct side of the curve? 

A couple of aspects. Internally, they should have a very clear policy for the use of the data, and they should make it clear across the company, and to whomever the data is being collected about. Often it might be in an employment contract, and people consent: you’re a driver, and you’re going to be monitored for speed, distance, and the rest of that kind of stuff. It’s clear; you might not like it, but it’s clear. It’s just like working in financial services: it’s clear you have to be fingerprinted and have a background check, and if you don’t like it, you don’t get the job, but it’s very clear what you need to do. Being clear is, I think, very important. 

Two, when you go to your service providers, those who are collecting the data, you can ask a couple of simple questions: ‘Are you collecting data about individuals? That data is confidential. Do you have a SOC 2 Type II report against the security and confidentiality trust criteria?’ If they look at you with a blank stare, that’s a bad sign. If they say, ‘No, we don’t have it yet, but it’s a target; we understand what those are,’ that’s a good sign. If they say, ‘Yes, we do,’ that’s a very good sign. The SOC criteria, although not the most comprehensive, at least give you an indication that the company you’re working with has thought about the kinds of security and confidentiality controls they need to have on their data, and has thought about having a third-party attestation of the controls they’ve implemented.  

They’re not overly onerous, but they’re not simple either, and they’re a simple way to gauge whether that service provider has even thought about, or has some expertise in, the stewardship of confidential information and securing it correctly. 
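The three-way vendor signal Bauer describes can be sketched as a tiny screening function. The answer strings and signal labels are illustrative assumptions, not a formal rubric:

```python
# A sketch of Bauer's three-way signal when asking a vendor whether they
# have a SOC 2 Type II report. Labels and answer strings are illustrative.

def soc2_signal(answer: str) -> str:
    """Map a vendor's answer about SOC 2 Type II to a rough trust signal."""
    normalized = answer.strip().lower()
    if normalized.startswith("yes"):
        return "very good sign"   # audited controls exist
    if "not yet" in normalized or "target" in normalized:
        return "good sign"        # they know what SOC 2 is and aim for it
    return "bad sign"             # blank stare: hasn't thought about it

print(soc2_signal("Yes, we do"))
print(soc2_signal("Not yet, but that's a target"))
print(soc2_signal("What's SOC 2?"))
```

The design point is simply that the question is cheap to ask and the answer is informative, which is why a non-expert warehouse operator can use it.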

 

I’d love to go back a little to your experience and see if there were any incidents. Obviously, what you’re trying to prevent with a security strategy are the bad things that happen, but I wonder if there were any incidents or occurrences that stand out in your mind as times when the proverbial birds hit the fan!  

Just thinking of the recent past, the Spectre and Meltdown incidents, which are just weeks old now, I think are quite interesting, for two reasons. One is that computers are just made up of layers of software, even on top of a secure chip, a CPU from Intel, or AMD, or take your pick, and the software that runs on those chips is optimised to do things in a certain way. What those incidents tell you is that when the world is based on a small number of manufacturers and a small number of operating systems, there are going to be, and will always be, potential security flaws, and they’re going to be very, very widespread; they can affect 90 percent or more of the computers in the world. There are always going to be people looking to find those flaws, and a company needs a good way to react to that.  

What struck me as quite good about that, which was different from the past, is that the process kind of worked: the researchers followed a responsible disclosure process and worked with the major vendors ahead of time, so Amazon, Microsoft, Google, the major cloud providers and major operating system providers had the ability to fix the problem within a very short amount of time, or even before the disclosure. That process, I thought, worked quite well. Then you contrast that with the complete opposite: the WikiLeaks and Snowden disclosures showed that security flaws had been collected over the years, only some of which had been disclosed back to the vendors.  

So, what that shows you is: 1) You can’t keep a secret forever. 2) Although in terms of national security a government has to make certain decisions, keeping security flaws secret, or hoping they’ll stay secret so they can be exploited only for, say, intelligence purposes without affecting everyday commerce and people, is a very slippery slope. We need to be very vigilant about that as a country and as companies, because when it doesn’t work correctly it puts the whole world at risk. The damage from WannaCry, which was based on the discovery of some of these flaws, was quite significant, particularly in the healthcare industry, where hospitals couldn’t take patients in, or at least do the paperwork… 

 

They even had some trains that were shut down, so public infrastructure was impacted by this. 

It was. I found those instances to be very telling: one showed a way that these things can be discovered and the public at large protected, and the other showed a way where that wasn't the case. I think the policies there need to evolve, especially given the interconnections that are growing every day.

 

Absolutely. One of the interesting incidents I wanted to ask you about was the Mirai botnet, which was really nothing more than a nuisance, but consider this idea that you have all these security cameras, DVRs, and printers that could be compromised because the manufacturers didn't embed an easy way to change the passwords, and have left these connected devices exposed. I'd like to get to the point of how you deal with that if you're just a regular person, or running say a small business, as more and more products become connected. What should people do about this stuff? 

Good question. First of all, an interesting statistic is that something like two-thirds, 66 to 70 percent, of all security incidents affect small and medium businesses, on a total-number basis. So, they're the right target, and one of the reasons they're the right target is exactly what you said: the small-to-medium business may not have the wherewithal, either in expertise, or time, or perhaps even money, to understand all the security issues they face and come up with a program to deal with them.  

I think the answer to that is the following. One is that the risk is only going to increase; every SMB is looking to automate, and they hold data about their customers and their business. They're putting out more devices, as we talked about, not just printers but cameras and other kinds of automation, all of which are targets. I would encourage them to think about the basic principles of security. Just as they probably have layers of physical security, they've got a front-door lock, and a lock on maybe the manager's office, and a safe, they have to think about their technical security the same way: a large number of perimeter devices which need to be isolated from their ordering, tracking, and business systems, which in turn need to be isolated from their core data. Just applying that basic layered approach to their technology, and then engaging with some local IT consultants who have credentials to help them implement it, I think would go a very long way toward prevention.  

The vendors of these systems need to think more broadly about the protections they've built into them. I think it's bad practice if they create a device with a static password, or a static key, that they're going to put on 100,000 devices deployed all over the world, and think no one is going to discover it, because they are. People are discovering all kinds of things, not just in the devices businesses are supplying, but scarily enough in the devices people are buying for their children, talking dolls and things like that which are Internet connected, that are listening all the time, with poor security, without a privacy policy, and without any statement from the vendor about what they've done to implement security. Consumers should stay away from those if they can't read something on the package about how the information is protected and how the device is protected. 

So, to me it's almost like ingredients. The FDA makes companies put the ingredients on the label before you can buy food; well, electronic devices that are going to be connected should have a statement of security on them, so you can read it and say, 'Okay, this thing's connected, they've got some kind of protection on it, and they've got a way to update it once it's deployed'. Minimum criteria which should be on a label. 

 

That makes so much sense given all these connected devices. We're coming up to the end of our allotted time here, but I wanted to ask: when somebody asks you, 'How do I get a little bit smarter about how to secure my business, and how to secure my life?' are there any resources you could point people to? 

The Times tech blog, and I'm sure all the major newspapers have this, but I read the New York Times, and its blog on technology very frequently has very good advice on how to protect home technology, how to think about the kinds of devices being brought into the home, and how to think about security. It's very readable, with links to other blogs and other resources, so for the consumer, that's a great place to go. I'm not receiving any compensation from The Times for that, I just think it's very readable! 

I'm going to put out the weirdest recommendation you've probably heard in a long time. If I'm a company, especially if I'm an SMB, and I want to understand the elements of a security program I should be thinking about, the essential elements, the best resource I've seen in a long time is the security criteria from the New York State Department of Financial Services, the NYDFS, its Part 500 criteria… 

 

We’ll put a link in the show notes for sure. 

You can put a link, the NYDFS Cyber Insurance Report. It's probably 15 pages, extremely readable, and it's like the best government publication I've ever read. It just lays out, 'Here are the things you should do: put somebody in charge, write some policies, monitor it, worry about your vendors'. Very readable. It doesn't tell you exactly what to do, it's not prescriptive, but if you're going, 'I don't know what to do, or the set of things I should be thinking about', go read that. A consultant can charge you $100-grand to tell you the same thing; you can just go get this publication and it will tell you. It was written to provide criteria for companies doing financial services work in New York State, but when I first saw it a year ago, I went, 'This is really well written'. If I was doing security in any industry and I really didn't know where to start, or if I was the CEO and I wanted to ask some questions of the man or woman running security, I could get that out and go, 'Tell me how we measure up to the points in here'. You'd have a pretty good handle on whether or not people are paying attention. 

 

That's really helpful advice; it's going to be worth the price of admission alone for most listeners.  

David, as always, it's been a pleasure and extraordinarily informative. I think your insights continue to be relevant and valuable, and I really want to thank you for taking some time to speak with us. 

No problem, Ed. It's been an interesting journey, and it's going to continue to be interesting for years to come, so thank you very much. 

 

For everybody listening, this is Ed Maguire, Insights Partner at Momenta Partners, and that was our interview with David Bauer, who is a managing partner at Sandhill East. If you have any further questions please reach out to us, and we'll put links to the resources in the show notes as well. Thanks a lot. 
