October Meet-Up News: "How Cloud-Based Data Warehouses are Changing Analytics"


Join us at our next meet-up, “How Cloud-Based Data Warehouses are Changing Analytics,” on October 22nd.

A big thank you to Rapid7 for hosting us and ODSC for helping us plan this event.

If you would like to attend, please fill out the form above. Registration is free. 

If you are unable to attend our next meetup but would like to join a future one, subscribe to our mailing list for notifications. 

August Meet-Up Recap: The Power of Combining BI Analytics and Machine Learning

Data scientists spend a majority of their time on tasks that are ripe for automation: cleaning, transforming, and locating their data. With machine learning proving it has the credentials to take on these responsibilities, where should data scientists redirect their focus?

Data governance.

At our last meetup, Daniel Gray, Senior Director of Corporate Sales Engineering at AtScale, talked about all things BI but, more importantly, shared his thoughts on loose definitions. What are the risks, and just how far can they set your organization back? Gray believes that if your business intelligence team and your machine learning team are working on the same projects with different definitions, you won’t get the results that you want, as “the chances that everyone throughout the entire organization defines the metrics exactly the same way is unlikely to happen.”

So, where is the middle ground? The solution lies within the semantic layer. 

Gray explains that with a semantic layer, “you want to build your dimensions and hierarchies one single time, and inherit those in all of your different BI tools,” as this “will allow you to have governance and you won’t have mismatched data at the end of your cycles.” By exposing the semantic layer, you create an abstraction between your data warehouses and lakes and the people consuming that information, enabling your team to maximize results.

Using these tips, how are you going to elevate your organization? 

Schools Get Detention When It Comes to Cybersecurity

This is a guest post written by Hack Secure's Matt Lynch, a recent graduate of Bentley University. Check out his perspective on how educational institutions can do a better job preparing their students for the very real cyber threats they face.

While at school, there is one thing I always felt: safe. The campus had its own police department, and emergency crews were no more than a few minutes away at any given time, a fact that was tested several times given the number of new cooks. However, looking back after my time working in the cybersecurity industry, I feel I may not have been as safe as I thought. While the school may have been doing everything it could to protect me physically, I was left vulnerable online. Cybercrime is an ever-present threat to today's society, and schools are the new target that cybercriminals are exploiting.

Hackers are getting smarter every day, developing new methods and techniques to break into systems undetected. As companies begin to view cybercrime as the threat that it is, they realize that their old endpoint protection from software like McAfee, Sophos, Norton, and others just does not cut it anymore. They are now beginning to look towards scaling their security at the same rate that the hackers develop their new tricks. They are doing this through software developed by the likes of Carbon Black, CrowdStrike, and Cylance.

Schools, on the other hand, continue to use antiquated software every day. Even a smaller university with a few thousand students and a couple of hundred faculty members has potentially tens of thousands of unsecured endpoints at any given moment. Most schools will provide a computer for each student and faculty member with Sophos, McAfee, or Norton pre-installed as its default anti-virus protection. Students and faculty alike will also often bring their personal computers, tablets, and phones with them, all of which are connected to the school's network. Given that the user of most of these devices is most likely a young adult between the ages of 18 and 22, it is not crazy to assume that they may be used to visit some less-than-legitimate sites to stream Game of Thrones or watch a basketball game. These sites often pose a high risk of containing malware, which is not always detected by over-the-counter security products. This type of attack is one of the passive ways that hackers can get into the system, because the school's endpoints have either minimal protection or no protection at all.

It is now time for schools to realize that they are in fact businesses and that they need to act like it. All schools hold mountains of information valuable to hackers. They have records of all of their customers' and employees' social security numbers, bank accounts, credit cards, and addresses. If a hacker were to breach a school, all of the students, their parents, and the faculty would be at risk of identity fraud, credit card fraud, and several other crimes. By switching over to more modern endpoint security tactics, schools will be less vulnerable to attacks: modern tools make it more difficult for an infiltrator to break into the network in the first place, and detect a breach faster if one does occur.

What companies like Carbon Black do is make it simple for large businesses such as schools to get high-quality, next-gen anti-virus software, by having it be one agent, one console, and cloud delivered. The next-gen antivirus will automatically detect ransomware, malware, and non-malware attacks on any of the endpoints connected to the network. The agent that is on the device will then send it through the cloud, to the console, which will be under the control of the head of cybersecurity. From there, they can decide to shut down the device remotely to prevent any harm from being done to the network.

If schools were to invest the money in next-gen cybersecurity detection products, they would significantly decrease the likelihood of a cyber attack happening at all and reduce the potential damage from any attack that does occur, allowing them to worry about education first and safety second.

ODSC Panel - How Data Science is Opening New Frontiers for the Insurance Industry

Last Tuesday we heard from four panelists working in the insurance industry on how data science is transforming their businesses:

Marc Light (BitSight) - Director of Data Science

John Langton (Wolters Kluwer) - Director of Applied Data Science

Andrew Campbell (Sun Life Financial) - Director of Analytics and Insights

Satadru Sengupta (DataRobot) - GM and Data Scientist for Insurance

The panel was moderated by Bobby Brennan, who runs a data science consulting firm in Boston.

The panel began by discussing an exciting and important line of insurance that has recently emerged: cybersecurity insurance. Marc talked about how BitSight works with insurers to determine a company's risk of being breached; by creating automated tools for probing a company's defenses - without needing access to the company's internal resources - BitSight is able to accurately measure their level of security, allowing insurers to make informed decisions on whom they should underwrite and for how much. John also drew from his experience at VisiTrend and Carbon Black to discuss what he saw as unique challenges in measuring cybersecurity risk.

A common focus for each of the panelists is the way humans and machines can interact to create positive outcomes. Insurance decisions often carry large financial burdens and can have a huge impact on the livelihood of individuals and businesses, so it's crucial that the decision making process retains a human component. Andrew spoke about how his team enables human actuaries to make more informed decisions by drawing on machine-driven analysis. The panel seemed to be in agreement that together, humans and machines can drive better outcomes than either alone.

The panel concluded with a general discussion on the impact of data science in the insurance industry, both at the present moment and moving forward, and each of the panelists agreed that data science has been nothing short of transformative. Satadru pointed out that machine-driven statistical insights had helped curb the billions of dollars' worth of insurance fraud committed every year, and expressed hope that we'd only just scratched the surface of what's possible. The consensus was that, while data science has had a massive impact on the insurance industry, the focus thus far has been on relatively simple methodologies and easily accessible data. Each of the panelists agreed that there are still vast improvements on the horizon, particularly as we uncover new data sources and learn to capture more signal from unstructured data.

Thank you to Marc, John, Andrew, Satadru and Bobby for an engrossing discussion and some fascinating insights on how data science is transforming insurance.

From left to right: Bobby Brennan, Marc Light, John Langton, Andrew Campbell, Satadru Sengupta


Survey: The State of Cyber Security Hiring in Boston

Are you looking to hire entry level cyber security practitioners? Help the Boston community understand the type of traits you look for in a job candidate by participating in our short cyber security hiring survey.

In collaboration with Northeastern University's College of Computer and Information Science, we've developed a short seven-question survey to help candidates understand how they can be best prepared to work at your company!


"Serverless" Architecture: The Risk of Going Serverless and Why It's Worth It

Cyber Security Practitioner Series brought to you by:


In this week's interview for the Cyber Security Practitioner Series, we talked with Tom McLaughlin from CloudZero about serverless architecture.

Tom talks about what serverless architecture is, how it is utilized, the risks it poses from a security standpoint, and why it is still worth it in the end.

Tell us about yourself, your background, and how it pertains to serverless architecture.

I’m an operations engineer by profession, which in lay terms translates to, “I make the cloud run.” While software engineers write the products that we use, I’m the person responsible for ensuring that those engineers are able to deliver features and for keeping the service stable and reliable so users (or customers) are able to use those features.  Your killer product feature is useless if customers can’t access it reliably.

These days I do developer relations (DevRel) for an early stage startup, CloudZero, that is building a site reliability platform on an AWS serverless architecture.  I engage with our market as an engineering peer to discuss the issues we’re solving, like site reliability and serverless, and to learn from those engineers how we can solve their problems better.  The work is (and, I’d argue, should be if it’s going to be done right) a mixture of Engineering and Product functions with Marketing strategy and tactics mixed in.

What exactly is serverless architecture and why does it matter?

“Serverless” is currently one of the most nerd-rage-inducing terms, in a tie with “observability”. (We’ve grown tired of arguing about what “DevOps” is, and fortunately new terms have come along.) We call it “serverless” because the host layer (server) has been abstracted away from us and is entirely handled by the cloud provider.  We have no responsibility for host maintenance in this model.  The general maintenance tasks we’re used to, e.g. OS patching, performance monitoring, and debugging, are handled by the cloud provider and opaque to us.  This is a good thing because it forces us to focus our effort on technology that advances our core business.  More resources can be spent on developing the product features that increase adoption than on patching your hosts.  No one buys your product because of your internal patch management strategy. They buy your product because it does something useful.

What differentiates this from a PaaS is the execution model of the technology.  With PaaS platforms you’re paying for hosting whether people are using your product or not. This is not the case with serverless platforms.  With serverless you pay when your service is actually used.  A serverless system costs you nothing if no one is using it.  Your bill has gone from a capacity-based model (paying for resources to support a theoretical load) to a consumption-based model (paying for how many people actually use my service).

We could have called it Jeff.
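To make Tom's description concrete, here is a minimal, hypothetical sketch of a serverless function in the style of an AWS Lambda Python handler. The event shape and names are illustrative assumptions, not taken from CloudZero's product; the point is that the platform invokes the function per request and bills per invocation, with no host for you to provision or patch.

```python
# Minimal sketch of a serverless function handler (AWS Lambda-style,
# Python runtime). The cloud provider calls handler() per request and
# bills per invocation and execution time; there is no server process
# for you to provision, patch, or monitor.
import json

def handler(event, context):
    # 'event' carries the request payload; 'context' carries runtime
    # metadata supplied by the platform (unused in this sketch).
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Invoked locally for illustration; in production the platform calls it.
result = handler({"name": "CloudZero"}, None)
print(result["statusCode"])  # 200
```

If no request arrives, the function simply never runs, which is where the consumption-based bill of zero comes from.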

What has adoption been like for serverless architecture?

Serverless adoption is still in the early phase.  Think of where containerization (e.g. Docker) was 3 or 4 years ago, or even AWS public cloud 7 or 8 years ago.  Serverless is, I think, still the domain of the early adopters.  I joke that most of the leaders in the serverless ecosystem can all be found at the ServerlessConf events.  Think of that: a single event can still draw most of the leaders in the space.  I had severe FOMO missing this past one in NYC.  My Twitter timeline was filled with the people whose tweets and blogs I follow, along with people I regularly engage with on a dedicated serverless Slack group.

The organizations I’ve found to be adopting serverless are quite diverse.  They range from large companies like Nordstrom, to sizeable companies like iRobot, and finally to early stage startups like CloudZero.  Because of the leap serverless provides over microservices (a leap containerization doesn’t provide), I think you will begin to see many organizations come to a fork in the road as they look to modernize their software stack and IT services delivery: go primarily containerization or go primarily serverless.  More aggressive organizations may decide to leapfrog over containerization and go directly to serverless.  While moving applications to containers may be easier up front than re-architecting them for serverless, containerization comes with the overhead of maintaining container management platforms.  AWS provides Elastic Beanstalk, where you can host your Docker containers, but people will still need platforms like Kubernetes to handle large-scale container deployment.  Contrast that with serverless, where you’re making a conscious decision to offload the platform management to your cloud provider. If you’re no longer trying to operate a containerization platform, you can perhaps redirect those engineering resources towards your application re-architecture efforts.

For some people this time may be too early for them to care about serverless.  For others this is exactly the right time.  The architecture changes bring so many new opportunities and questions that are waiting to be solved.

What potential security risks does this technology pose? Do you feel the rewards outweigh the potential risks? Why?

With every new layer of cloud abstraction you have to ask yourself how comfortable you are with outsourcing a part of your security.  There are people who believe they can provide physical security to a data center better than public cloud providers.  There are people who believe they can provide better security at the virtualization layer (e.g. preventing cross-VM or cross-container attacks) than the public cloud providers.  The same questions arise for serverless. Underneath AWS Lambda is a container that AWS manages.  Do you feel you can engineer a more secure, appropriately patched container better than Amazon can?  And is your time spent on that a better use of time than addressing other issues?  You have to ask yourself these questions and provide realistic answers.  Just because you own and control something in no way means you can do it better.  I legitimately trust public cloud providers to do a lot of this work better than me.  They hire specialists for this work; for me, I’m at best a generalist.

If you make the jump to serverless a positive I see coming is increased focus on application layer security.  Ask yourself at what maturity a company starts doing application security and pen testing.  Ideally they start doing it when they have the time and resources and have addressed other lower hanging fruit and more damaging issues in their environment.

If you’re no longer managing and patching OS vulnerabilities, redirect your time to patching application dependency vulnerabilities.  A service like Snyk is, I think, poised to make a major impact in the serverless space.  It’s an easy-to-use service that I, as a non-security specialist, can get started with.

If you’re no longer worried about ensuring that your NoSQL platform is properly patched and not exposed to the internet, refocus your efforts on ensuring your AWS S3 buckets aren’t publicly exposed. Or better yet, focus on application pentesting earlier.  I would love a service that constantly probed my infrastructure for vulnerabilities.  It’s not that I didn’t want it before, it’s that with serverless architecture I now have time to potentially make use of the data the service found.
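As a rough illustration of the S3 exposure check Tom describes, here is a hypothetical Python sketch that inspects an ACL structure (shaped like the AWS GetBucketAcl response) for grants to the global AllUsers group. A real audit would call the AWS APIs and also consider bucket policies and public access block settings; the ACL data below is made up for illustration.

```python
# Hypothetical check: does an S3-style ACL grant access to everyone?
# The dict shape mirrors the AWS GetBucketAcl response; a grant to the
# global AllUsers group makes the bucket publicly accessible.
PUBLIC_GROUP = "http://acs.amazonaws.com/groups/global/AllUsers"

def publicly_exposed(acl: dict) -> bool:
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") == PUBLIC_GROUP:
            return True
    return False

# Illustrative ACLs (not real AWS data).
private_acl = {"Grants": [{"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
                           "Permission": "FULL_CONTROL"}]}
public_acl = {"Grants": [{"Grantee": {"Type": "Group", "URI": PUBLIC_GROUP},
                          "Permission": "READ"}]}

print(publicly_exposed(private_acl))  # False
print(publicly_exposed(public_acl))   # True
```

Running a check like this across every bucket on a schedule is exactly the kind of low-effort, high-value probing Tom says serverless teams now have time for.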

By allowing your public cloud provider to address more of the security of your stack you can focus on the more sophisticated security issues earlier.  This is a good thing for security so long as people recognize the opportunity to fill this new space.

Is there anything that you have not gotten to talk about that you feel is important for people to know?

I just want to reiterate that serverless really is different and represents a major jump in cloud computing compared to what we’ve seen with containers.  The architecture is so different from what we’re used to that its mass adoption will lead to enormous potential disruption.  You can look at almost every area of cloud technology and, given enough thought and time, see endless possibility.  The shift from capacity-based to consumption-based billing will, I think, lead to cost being measured as closely as we measure performance.  In fact, I think cost will more directly factor into performance choices.  We’re still building management tools that are configuration-file based, but I can see a trend towards more graphically oriented design.  The opportunities are endless in this area if you take the time to rethink ideas instead of just implementing what we’ve been doing.  If you’re having a hard time coming up with ideas, I touch on some areas in my presentation “Serverless Ops: What do we do when the server goes away?”


Hack Secure Dinner Series: Security of the Blockchain (51:37)



Hack Secure's first dinner series was headlined by Professor Brian Levine of The College of Information and Computer Sciences at UMass Amherst.

Brian's talk focused on blockchains and how blockchain-based cryptocurrencies are quickly advancing from simply supporting financial transactions to hosting advanced software services and initial public/coin offerings. He discussed the security of using blockchains for those purposes, explained the basic operation and assumptions of blockchains such as Bitcoin and Ethereum, and described the successes of these platforms, as well as the attacks these systems have suffered.

He then took a look at a few specific cases. For example, in May 2016, an Ethereum-based service called "The DAO" was created as a type of decentralized hedge fund. It raised over US$150M worth of ether during a crowd sale. By June 2016 an attacker began stealing ether from The DAO, not due to a flaw or vulnerability in Ethereum itself, but rather a flaw in The DAO's programming. He also discussed how, in July 2017, a flaw in a software "wallet" for Ethereum allowed an attacker to steal US$30M from some users.

If you have any questions for Brian, feel free to contact him.

DNS Analytics, What Is It and Why Is It Important?


For our next installment in the Cyber Security Practitioner Series, we interviewed AlphaSOC co-founder Chris McNab about DNS (Domain Name System) analytics, its importance, and what AlphaSOC is doing about it.

Chris discusses his Splunk app, DNS Analytics for Splunk, and how AlphaSOC uses it to find anomalies and malware by analyzing the DNS logs. 

Tell us a bit about yourself, your background and what you're currently working on.

I'm a co-founder of AlphaSOC and author of Network Security Assessment (O'Reilly Media) which is a penetration testing title in its 3rd edition! I've worked in the security industry since 2000 on the consulting side of things, focusing on assessment work, and in recent years a lot of incident response and forensics. I tracked Alexsey Belan a few years ago, and put together a blog post recently describing his TTPs (https://medium.com/@chrismcnab/alexseys-ttps-1204d9050551) after it was publicly known he was associated with the massive Yahoo hack. We set up AlphaSOC back in 2013 upon realizing that DNS was a reliable and inexpensive channel to pay attention to when flagging malware and lateral movement in large networks.

What is DNS analytics and why is it important?

DNS Analytics for Splunk is the flagship AlphaSOC product that we've been working on and continuously improving since 2013. It's a Splunk app that takes minutes to deploy and will instantly flag anomalies and malware within an environment by processing DNS logs. The analytics engine itself is platform agnostic: while many of our customers use Splunk, we support non-Splunk environments as well via Network Flight Recorder (https://github.com/alphasoc/nfr), a lightweight Linux command-line utility. Most security products used within a SOC perform one-dimensional correlation of threat intelligence feeds, flagging traffic to known-bad domains. DNS analytics performs three-dimensional scoring, using behavioral and timing analytics to flag anomalies, emerging threats, and malware without signatures. For example, we're able to programmatically flag DGA traffic and DNS tunneling using analytics alone (versus threat intelligence feeds), and highlight odd traffic patterns (e.g. beaconing to a young domain with a suspicious TLD).
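To make the DGA example concrete, here is an illustrative Python sketch (not AlphaSOC's actual algorithm) of one behavioral signal such an engine might use: high character entropy in a domain label. The length cutoff and entropy threshold are arbitrary assumptions for illustration; a real engine combines many signals (domain age, TLD reputation, query timing) before alerting.

```python
# Illustrative DGA heuristic: algorithmically generated domain labels
# tend to have high Shannon entropy (near-random characters), while
# human-chosen labels repeat common letters.
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a domain label."""
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_dga(domain: str, threshold: float = 3.5) -> bool:
    # Score only the leftmost label; the cutoff values are assumptions.
    label = domain.split(".")[0]
    return len(label) >= 10 and label_entropy(label) > threshold

print(looks_like_dga("google.com"))               # False
print(looks_like_dga("xjw9qk2lfh8z3vbn1t.info"))  # True
```

A score like this only becomes an alert when it lines up with other dimensions, which is the "three-dimensional scoring" idea described above.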

What are the threat intelligence feeds you use in your product and how does that help identify threats? Any specific examples you could touch on?

We curate our own threat intelligence through investigating the alerts within the system and marginal hits (e.g. young domains and FQDNs known to sandboxing engines). As such, we're able to categorize adware, unwanted programs, third-party VPN packages, P2P traffic, and malicious traffic patterns to C2 domains. Of the alerts we serve to users, only a small percentage are generated using a threat intelligence correlation, and the majority are generated by the analytics stack to highlight suspicious queries within the larger DNS dataset (which is often millions of events per day). As we improve the classifiers and analytics engine, we actually become less and less dependent on threat intelligence, which isn't a bad thing.

How much malware actually uses DNS for command and control?

According to research by Infoblox and BlueCat Networks, around 95% of malware families use DNS for command and control (C2). Even state-sponsored malware such as Stuxnet has been found to use DNS for C2 purposes. DNS has proven to be a reliable channel to pay attention to when identifying infected hosts within an environment.

How does AlphaSOC flag infections that don't generate DNS traffic?

To cover the small blindspot that remains (exploited by the 5% of malware families not using DNS), we've released IP Analytics for Splunk to flag anonymized circuits (e.g. Tor, I2P, and Freenet) and traffic to IP addresses which are known C2 and sinkhole destinations.
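As a hypothetical sketch of the IP-side check described above, the snippet below flags destinations that fall within known C2 or sinkhole ranges. The networks listed are reserved documentation/test ranges standing in for a real curated feed; a product like IP Analytics would consume continuously updated intelligence instead.

```python
# Flag traffic to destinations on a known C2/sinkhole list.
# The ranges below are IETF documentation networks used as stand-ins.
import ipaddress

KNOWN_BAD_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # stand-in for a C2 range
    ipaddress.ip_network("198.51.100.7/32"),  # stand-in for a sinkhole address
]

def is_flagged(dest: str) -> bool:
    """Return True if the destination IP falls in any known-bad network."""
    addr = ipaddress.ip_address(dest)
    return any(addr in net for net in KNOWN_BAD_NETWORKS)

print(is_flagged("203.0.113.42"))  # True
print(is_flagged("8.8.8.8"))       # False
```

This correlation-style check complements the DNS analytics: it covers infections that talk straight to an IP address and never issue a DNS query.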

How can someone get started leveraging AlphaSOC analytics to help protect their enterprise?

If you have Splunk, the DNS Analytics (https://splunkbase.splunk.com/app/1657/) and IP Analytics (https://splunkbase.splunk.com/app/3721/) apps are free to download and evaluate for 30 days without restriction. By using the tools to process your network logs, you can flag known and unknown malware, emerging threats, and policy violations (e.g. third-party VPN use, P2P traffic, and cryptomining). The visibility provides a lot of insight into what's going on within large, complex enterprise environments. If you don't have Splunk, take a look at Network Flight Recorder (https://github.com/alphasoc/nfr), our Linux command-line utility to score DNS traffic, and get in touch with us to discuss your requirements! The analytics API and feeds that we provide can be consumed easily and integrated with SIEM and orchestration platforms.


Information Security as a Revenue Driver for the Enterprise



We recently interviewed Brian Castagna for our Cyber Security Practitioner Series on the topic of how enterprise organizations should view their information security programs as a revenue driver as opposed to a cost center.

Brian shared his wisdom with us on his approach to revenue driven security programs, and how he uses this while serving as the Director of Information Security at Oracle Bare Metal Cloud.

Tell us a bit about yourself and your current role.

I’d like to start this Q&A with a confession.  I’m trusting you as the reader with my secret.  (In a whisper) “I used to be an auditor.” Sssshhh, don’t tell anyone. Yes, I was one of those smug 22-year-olds who cost $200 an hour and asked you “what’s Linux?”  I started my career as an IT auditor performing SAS 70, PCI DSS, and ISO 27001 audits at various public accounting firms including KPMG, PwC, and Schellman.  And while I jest, there is tremendous value in building information security programs starting with a strong foundation of IT general controls: access, authentication, change management, backup, and monitoring.

After 8 years of evaluating the security and controls of technology service providers, I realized I wanted to do more than just find security issues; I wanted to fix them too. For the past 5 years I’ve been building information security programs at venture-backed technology companies including Jumptap, Acquia, and Dyn.

In my current role, I lead the information security program for Oracle Cloud Infrastructure (OCI) Edge Services. Formerly Dynamic Network Services (DYN),  OCI Edge Services runs DNS, Monitoring and Email services for the edge of Oracle’s V2 Cloud.

Organizational leadership teams often make information security investment decisions to prevent or respond to a security breach. Should this be the primary driver for information security investment?

Information security is a great case study in human behavior.  We are a reactive species.  Why did you get that new home security system?  Because a robber just broke into your house.  Why did you start eating healthy and stop drinking Coke and eating Oreos and fried food?  Because you now have type 2 diabetes.  Why do organizations make significant increases in information security investments?  Because they just had a major security breach.

A common attitude among corporate executives is the following:

“Why would I invest money in information security when we haven’t had a security breach?  And if I did invest money in information security, it’s really just an insurance policy to protect against a cyber attack.”

This is the wrong line of thinking, in my opinion.  This type of attitude has contributed to the myriad breaches we see in the news every day.

Here are four areas that I believe should be drivers for information security investment:

  1. Revenue:  It’s the money, stupid.  What if information security were an implicit or explicit revenue center?  What if you used metrics to directly tie information security to increases in revenue?  People respond to money.  If investments in information security could open up new segments of the market such as healthcare, government, or e-commerce, that is an eye-opening pitch to executives versus “we need to protect against X scary event in the future.”

  2. Shorten Your Sales Cycle:  Are you living quarter to quarter?  Anxious to close that seven-figure enterprise deal to secure your next round of VC funding?  If you are able to meet or exceed your customer's security expectations, this will shorten your sales cycle through the security and legal hurdles found at larger enterprise customers.

  3. Marketplace Differentiation:  Customers of cloud service providers demand a strong security story.  If you can articulate your security to customers in a confident, but not boastful, manner, you will get more customers than your competition.

  4. Nature of the Business & Data:  What your business does, and the types of customer data you maintain, should have a strong influence on the level and type of information security investments your organization makes.  For example, say you are a fintech startup taking on personally identifiable information and bank account data in the cloud.  Your customers (banks) require security.  Regulators (SEC, privacy laws) require security.  Auditors (external and customer auditors) require security.  You require security, because you need to meet the needs of customers, regulators, and auditors, and most importantly to grow and mature your business.

How do you approach building information security programs to drive revenue?

I take a customer-centric view when I build information security programs.  That lens enables me to get more buy-in from the business-driven departments at an organization: executives, customer support, sales, account management, and product.  A customer-centric security program is a win not only for the business in driving revenue, but for security teams as well, as enterprise customers have expectations much more stringent than compliance standards.  Here are some of my focus areas to drive revenue:

  • Compliance:  As a former auditor, I have a love-hate relationship with compliance. Love, because foundational IT general controls bring a baseline level of structure and health to an organization.  That makes me happy :). Hate, because compliance is often window dressing, with insufficient focus on mitigating the threat models relevant to a particular business, be that strong vulnerability management or security incident response. Out comes the sad face :(. The reality is, compliance is now table stakes.  If you want to sell to mid-market or enterprise, you need the acronyms: SOC 1, SOC 2, SOC 3, ISO 27001, PCI DSS, HIPAA, FedRAMP, etc.

  • Customer Visibility:  Customers want visibility into the security of your product or service beyond the audit reports and questionnaires.  Figure out a way to provide them that visibility, and you will break down sales barriers.

  • Answer the Hard Questions:  Gone are the days of easy security questions from enterprise customers.  I completed a 420 question security questionnaire the other day.  If you can answer the hard security architecture and configuration questions well, it will help you get that top 20-30% of revenue that’s been elusive to your business.

  • Charge for It:  Why hello, Mr. Customer.  We are offering three product models: Bronze, Gold, and Platinum.  The Platinum offering comes with these five additional security features and services.  Which product do you prefer?  The customer likely has to get past his own corporate security team and make his boss happy.  Security should be an easy upsell.

  • Internal SLAs:  Go hard.  Make your security team a service provider.  Respond quickly, with internal SLAs on requests from customer support, account management, and sales.  Not only will you be making friends and kissing babies within peripheral business units, but you will make customers happy.

How does an information security program impact a company's enterprise value?

A properly designed and implemented information security program increases enterprise value. There are implicit and explicit benefits to having the right level of security, structure and control.  

Implicit examples include things like new-hire and termination processing, and background checks.  Having functional, and ideally automated, baseline IT general controls will save your entire company time and money.  There is tremendous value in making security easy and automated. In a recent conversation I had with the CISO of a Boston tech company, he told me he had decided to only allow third-party technology vendors that integrate with his company's single sign-on system.  That’s a great example of a security policy driving implicit enterprise value: dozens of security administrators are not required to manage access to 90+ third-party applications.

A more explicit example is opening up a new market segment.  As a cloud service provider, for example, you cannot do business with the Federal Government unless you have FedRAMP compliance. Get FedRAMP, and you open up a market segment where the revenue, and the resulting increase in enterprise value, can be explicitly tied to your efforts as a security professional.

How do you approach building security teams?

Building high-performing security teams is both challenging and exciting.  There is a huge talent gap for the required information security skill sets, particularly in security architecture, security engineering, and security incident response.  Couple that talent gap with the need for a blended skill set of technical and people skills, and you find yourself on a unicorn hunt.

I build security teams with skill sets that complement each other.  For example, some team members have a technical focus, others a people focus, a queue-based focus, or a project-based focus.  I approach team building by recognizing strengths and weaknesses, orchestrating the use of those strengths, and equipping my team with the right message and tooling to execute effectively.

When is the right time for a company to build out an information security function?  Why?

To answer this question, we first need to evaluate the applicability of the information security investment drivers discussed above. What is the target customer market? What is the nature of the product and its data?  What are the risks to the business?  Based on the answers to those questions, it’s easier to build out a roadmap or staffing plan for security.

However, herein lies the challenge of building the security team.  Often this question is driven by customer compliance requests, such as a SOC 2 audit, and not by a meaningful business strategy.  If I had a nickel for every time a recruiter messaged me on LinkedIn saying a company needs an information security director to get them SOC 2 compliant, I would be a rich man.

So, how do we answer this question?  Let’s start with some simple yes and no questions:

  1. Are you a SaaS, PaaS, or IaaS provider?

  2. Do you operate in the Cloud (e.g. AWS, Google, Azure, Oracle)?

  3. Do you want to sell to mid-market and enterprise customers?

  4. Do you want to sell to regulated industries or geographies (healthcare, financial services, government, e-commerce, the European Union)?

  5. Do you take on sensitive customer or consumer data - intellectual property, source code, PII, credit card data, bank records, and/or strategy documents?

If you answered yes to #1 above, you should likely hire information security resources by the time you are 200 people.

If you answered yes to #1 and any of #2-5, you should hire information security resources somewhere between 50 and 150 people.  The more questions you answered yes to, the closer to the 50-person mark you should be hiring.
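As a toy sketch only (the function name and the interpolation within the 50-150 band are my own illustration, not the author's exact numbers), the rule of thumb above could be encoded as:

```python
def security_hiring_threshold(is_cloud_provider, other_yes_answers):
    """Approximate headcount by which to hire security staff.

    is_cloud_provider: answer to question 1 (SaaS/PaaS/IaaS provider?)
    other_yes_answers: number of "yes" answers among questions 2-5 (0-4)
    Returns an employee-count threshold, or None if the heuristic
    doesn't apply.
    """
    if not is_cloud_provider:
        return None  # the rule of thumb above only covers providers
    if other_yes_answers == 0:
        return 200  # yes to #1 only: hire by ~200 people
    # Yes to #1 plus some of #2-5: hire between 50 and 150 people,
    # interpolating so more "yes" answers pull the threshold toward 50.
    return round(150 - (other_yes_answers - 1) * 100 / 3)

print(security_hiring_threshold(True, 0))  # 200
print(security_hiring_threshold(True, 4))  # 50
```

The exact numbers matter less than the direction: the more risk drivers apply, the earlier the hire.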

A common misconception is that security is a one-person job: that you just need one manager, director, or CISO.  Information security is not a person; it is a team, and the scope, scale, and timing of building that team depend on the nature of your business.

Hack Secure Dinner: How Secure are Blockchains for Supporting Financial Transactions, Software Services, ICOs and Beyond

The goal of Hack Secure is to help educate the cybersecurity community on as many issues and ideas as we possibly can. In that vein, we like to host intimate dinners with cybersecurity practitioners and executives to discuss current topics.

Our next dinner will be highlighted with a talk given by Professor Brian Levine of The College of Information and Computer Sciences at UMass Amherst. (If you're interested in attending a future dinner, please reach out to us below.)

brian ps 2.png

Brian's talk will focus on blockchains, and how blockchain-based cryptocurrencies are quickly advancing from simply supporting financial transactions to hosting advanced software services and initial coin offerings. He’ll discuss the security of using blockchains for those purposes. He will also explain the basic operation and assumptions of blockchains such as Bitcoin and Ethereum, then describe the successes of these platforms, as well as the attacks these systems have suffered.

We will be taking a look at a few specific cases. For example, in May 2016, an Ethereum-based service called "The DAO" was created as a type of decentralized hedge fund. It raised over US$150M worth of ether during a crowd sale. By June 2016, an attacker had begun stealing ether from The DAO, not because of a flaw or vulnerability in Ethereum itself, but because of a flaw in The DAO's own programming. He will also discuss how, in July 2017, a flaw in a software "wallet" for Ethereum allowed an attacker to steal US$30M from some users.
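The flaw in The DAO's code was a reentrancy bug: the contract sent funds before updating the caller's recorded balance, so a malicious recipient could call back into the withdrawal routine and drain funds repeatedly. A toy Python sketch of the pattern (not Solidity; all names and amounts are illustrative):

```python
class VulnerableVault:
    """Toy model of the reentrancy flaw behind The DAO theft."""

    def __init__(self):
        self.balances = {"attacker": 100}
        self.pot = 1000  # total funds held by the contract

    def withdraw(self, who, receive_callback):
        amount = self.balances.get(who, 0)
        if amount > 0:
            self.pot -= amount
            # Bug: sending funds runs the recipient's code BEFORE
            # the balance is zeroed on the next line.
            receive_callback()
            self.balances[who] = 0


vault = VulnerableVault()
stolen = 0

def malicious_receive():
    # Re-enter withdraw() while the balance still reads 100.
    global stolen
    stolen += 100
    if stolen < 500:
        vault.withdraw("attacker", malicious_receive)

vault.withdraw("attacker", malicious_receive)
print(stolen)  # 500 -- five withdrawals from a 100-unit balance
```

The fix is the same in any language: update internal state before handing control to untrusted code.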

If you would like to attend this event, or any future events being held by Hack Secure, please reach out to us below: 



Ryan Nolette is a security technologist and threat hunter at Sqrrl Data, which markets software for big data analytics and cybersecurity. In this lightning talk, Ryan gives an overview of the threat hunting process and recommends visualization methods that expedite it.

Ryan begins the discussion by showing what the process is currently like without visualization; it is monotonous, tedious and inefficient. By recognizing that humans are visual beings and naturally attuned to finding patterns, Ryan demonstrates how utilizing a visualization tool can save both money and time for security professionals.

Humans are visual learners, and Ryan's cohesive lightning talk puts that insight into perspective in a security context. By eliminating tedious and repetitive actions, security professionals can find threats in a fraction of the time required by conventional log crawling methods.


Brian Carrier (@carrier4n6) is the Vice President of Digital Forensics at Basis Technology, a software company specializing in applying artificial intelligence techniques to understanding documents and unstructured data written in different languages. In this lightning talk, Brian gives an overview of his experiences in using and designing open source security tools.

Brian begins his talk with a little about his experience in security, and how limited security tools were early on. When Brian was still a student, Dan Farmer and Wietse Venema released The Coroner’s Toolkit (TCT); Brian built on top of it to deliver a friendlier user experience, resulting in Autopsy. He then discusses the evolution of digital forensics, moving from individual tools to platform-based tools.

This talk zeroes in on the importance of the user experience in digital security and how the security space is constantly evolving. Brian focuses on the importance of extensibility in the security space, and gives real-world examples of how improving the design of security tools leads to more users.


Liam Randall (@Hectaman) is the Senior Director of Software Engineering at Capital One and the Founder and CEO of Critical Stack, a sensor delivery network. Liam’s keynote presentation gives a detailed overview of the state of open source cyber security.

Being a security professional himself, Liam brings real insight to the problems currently facing the cybersecurity space, and to what open source projects can do not only to help companies but also to stay one step ahead of attackers. Perhaps the most significant takeaway from Liam’s talk is the importance of application delivery within organizations, and how containers provide modular, isolated application delivery along with backwards compatibility.

Liam delves into great detail about certain open source projects, especially the MITRE ATT&CK framework, making this talk relevant for anyone interested in cybersecurity. He also understands that agility is critical: it drives organizations to respond rapidly in an advanced threat environment, and it provides valuable business insight as well.


Jason Meller (@jmeller) is the CEO of Kolide, a startup that builds osquery fleet management software. In his presentation, Jason discusses the core principles and advantages of osquery, an open platform for host analysis.

Three properties differentiate osquery from other technologies. First, osquery is “platform agnostic,” meaning it can run on a wide array of machines. Second, osquery is extremely scalable: its use at Facebook demonstrates that it can run on one machine or hundreds of thousands of machines. Finally, osquery is an open source project, meaning the community does much of the development and pushes the technology forward.
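The core idea behind osquery is that operating-system state (processes, users, open sockets, and so on) is exposed as virtual SQL tables that analysts query with ordinary SQL. As a rough analogy only — this builds a hand-made SQLite table with made-up rows rather than using osquery's actual virtual tables — the query pattern looks like:

```python
import sqlite3

# Hypothetical stand-in for osquery's `processes` table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE processes (pid INTEGER, name TEXT, path TEXT)")
conn.executemany(
    "INSERT INTO processes VALUES (?, ?, ?)",
    [
        (1, "init", "/sbin/init"),
        (412, "sshd", "/usr/sbin/sshd"),
        (977, "nc", "/tmp/nc"),  # a binary running from /tmp is suspicious
    ],
)

# The kind of hunt query an analyst might run against osquery:
rows = conn.execute(
    "SELECT pid, name, path FROM processes WHERE path LIKE '/tmp/%'"
).fetchall()
print(rows)  # [(977, 'nc', '/tmp/nc')]
```

With real osquery the same SQL would be run via `osqueryi` or a fleet manager such as Kolide, with the tables populated live from the host.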

This lightning talk demonstrates the value of osquery as an open project, especially in security settings. While only scratching the surface, Jason does a great job explaining the factors that make osquery one of the most important open source projects available today, while painting a broad picture of the platform’s capabilities and uses.