Security as Code By @LMacVittie | @CloudExpo [#Cloud #Microservices]

One of the most difficult things to do today is to identify a legitimate user

Security as Code

One of the most difficult things to do today, given the automated environments in which we operate, is to identify a legitimate user. Part of the problem is that the definition of a legitimate user depends greatly on the application. Your public-facing website, for example, may loosely define legitimate as "can open a TCP connection and send an HTTP request," while a business-facing ERP or CRM system requires valid credentials and group membership as well as device or even network restrictions.

This task is made more difficult by the growing intelligence of bots. It's not just that they're masquerading as users of IE or Mozilla or Chrome; they're beginning to act like users of IE and Mozilla and Chrome. Impersonators are able to fool systems into believing they are "real" users, human beings if you will, and not merely computerized artifacts. They are, in effect, attempting to (and in many cases do) pass the Turing Test.

In the case of bots, particularly the impersonating kind, they are passing. With flying colors.

Now, wait a minute, you might say. They're passing by fooling other systems, which is not the test Turing devised in the first place; his test required fooling a human being.

True, but the concept of a system being able to determine the humanness (or lack thereof) of a connected system may be more valuable in the realm of security than the traditional Turing Test. After all, this is security we're talking about. Corporate assets, resources, access... this is no game, this is the real world, where bonuses and paychecks rely on getting it right.

So let's just move along then, shall we?

The problem is that bots are getting smarter and more "human" over time. They're evolving and adapting their behavior to be like the users they know will be allowed to access sites and applications and resources. That means that the systems responsible for detecting and blocking bot activity (or at least restricting it) have to evolve and get smarter, too. They need to become more "human-like" and be able to adapt. They have to evaluate a connection and request within the context in which it is made, which includes all the "normal" factors like agent and device and application, but also a broader set of variables that can best be described as "behavioral."

This includes factors like a client pulling data from an application more slowly than its network connection allows. Yes, systems are capable of detecting this situation, and that's a good thing, because it's a red flag for a slow-and-low application DDoS attack. It also includes factors like making requests too close together, which is a red flag for a flood-based application DDoS attack.
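
What might those behavioral checks look like in code? Below is a minimal sketch in Python, assuming simple per-client thresholds; the constants, the deque-based request history, and the function names are illustrative choices, not anything taken from a particular product.

import time
from collections import defaultdict, deque

# Illustrative thresholds; in practice these would be tuned per application.
MIN_INTERARRIVAL_SECONDS = 0.05   # requests closer together than this look flood-like
MIN_READ_RATE_BPS = 512           # reading more slowly than this looks like slow-and-low

# client identifier -> timestamps of that client's most recent requests
recent_requests = defaultdict(lambda: deque(maxlen=20))

def flags_flood(client_id, now=None):
    """Red-flag clients whose requests arrive suspiciously close together."""
    now = time.time() if now is None else now
    history = recent_requests[client_id]
    too_close = bool(history) and (now - history[-1]) < MIN_INTERARRIVAL_SECONDS
    history.append(now)
    return too_close

def flags_slow_read(bytes_read, elapsed_seconds):
    """Red-flag connections that pull data far more slowly than the link allows."""
    if elapsed_seconds <= 0:
        return False
    return (bytes_read / elapsed_seconds) < MIN_READ_RATE_BPS

Either flag on its own is only a hint; a real policy would weigh several behavioral signals together before deciding a client is a bot.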

Another indicator, perhaps, is time of day.

Yes, that's right. Bots are apparently more time-sensitive than even we are, according to research that shows very specific patterns of bot attacks during different time intervals:

According to Distil Networks, the United States accounted for 46.58 percent of bad bot traffic, with Great Britain and Germany coming in second and third with 19.43 percent and 9.65 percent, respectively.

Distil Networks' findings are based on activity that occurred between January and December of 2013. Among its customers in the United States, bot attacks occurred most between 6 pm and 9 pm EST, when nearly 50 percent of all bad bot traffic hit sites. The period between 6 pm and 2 am EST was home to 79 percent of all attacks. By comparison, the 14-hour time span from 3 am to 5 pm EST saw just 13.8 percent of all malicious bot traffic.

-- Bad Bot Percentage of Web Traffic Nearly Doubled in 2013: Report

So what does that mean for you, Security Pro? That means you may want to be more discriminating after official business hours than you are during the work day. Tightening up bot-detection policies during these known, bot-dense hours may help detect and prevent an attack from succeeding. So all you have to do is implement policies based on date and time of day.

What? That's not that hard if you're relying on programmability.

Security as Code: Programmability

We make a big deal of programmability, in APIs and in the data path, as a means to achieve greater service velocity, but we don't always talk about how that same automation and programmability is also good for enabling a more adaptive infrastructure.

Consider that if you can automatically provision and configure a security service, you should be able to repeat that process again and again and again. And if you're treating infrastructure like code, well, you can use simple programmatic techniques to pull the appropriate piece of code (like a template or a different configuration script) and deploy it on a schedule. Like at the end of business hours or over the weekend.

By codifying the policy into a template or configuration script you ensure consistency, and by using automation to deploy it automatically at pre-determined times of the day you don't have to task someone with manually pushing buttons to get it done. That means no chance to "forget" and no scrambling to find someone to push the buttons when Bob is out sick or on vacation or leaves the organization.
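
As a rough illustration of that scheduled, template-driven approach, here is a short Python sketch; the template file names, the select_policy and deploy_policy helpers, and the cron schedule are hypothetical stand-ins for whatever automation tooling and management API you actually use.

import datetime
import json

# Hypothetical policy templates, codified as files kept under version control.
BUSINESS_HOURS_POLICY = "policies/bot-detection-standard.json"
AFTER_HOURS_POLICY = "policies/bot-detection-strict.json"

def select_policy(now=None):
    """Pick the stricter template during the bot-dense evening window (6 pm to 2 am)."""
    now = now or datetime.datetime.now()
    after_hours = now.hour >= 18 or now.hour < 2
    return AFTER_HOURS_POLICY if after_hours else BUSINESS_HOURS_POLICY

def deploy_policy(path):
    """Stand-in for pushing the chosen template to the security service's management API."""
    with open(path) as f:
        policy = json.load(f)
    print("deploying policy:", policy.get("name", path))  # replace with the real API call

if __name__ == "__main__":
    # Run from cron (or any scheduler) at the window boundaries, for example:
    #   0 2,18 * * *  /usr/bin/python3 deploy_bot_policy.py
    deploy_policy(select_policy())

Because the same script selects and deploys the same template every time it runs, the tightened policy goes out at 6 pm whether or not anyone remembers to push the buttons.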

Consistent, repeatable and predictable deployments are as much a part of automation as speed and time to market. In fact, if you look at the origins of lean manufacturing - upon which agile is based - the goal wasn't faster, it was better. It was to reduce defects and variation in the end product. It was about quality.

That's the goal with this type of system - consistent and repeatable results.  Quality results.

Now, you could also certainly achieve similar results with data path programmability by simply basing policy enforcement on a single time-based conditional statement (if time > 5pm or time < 8am then [block of code] else [block of code]). A data path programmatic approach means no need to worry about the command and control center losing connectivity or crashing or rebooting at the wrong time, and no need to worry about the repository being offline or disk degradation causing data integrity issues. But implementing the policy directly in the data path also has potential impacts, especially if you need to change it later. It's in the data path, after all.
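
To make that concrete, here is a hedged sketch of the same time-based conditional living in the data path, written as generic Python WSGI middleware rather than any vendor-specific data path scripting; the hours and the strict/relaxed check callables are assumptions for illustration only.

import datetime

class TimeAwareBotFilter:
    """Toy WSGI middleware: apply the stricter check between 5 pm and 8 am."""

    def __init__(self, app, strict_check, relaxed_check):
        self.app = app
        self.strict_check = strict_check      # callable(environ) -> bool, after-hours rules
        self.relaxed_check = relaxed_check    # callable(environ) -> bool, business-hours rules

    def __call__(self, environ, start_response):
        hour = datetime.datetime.now().hour
        # The same conditional described above: after hours, tighten up.
        check = self.strict_check if (hour >= 17 or hour < 8) else self.relaxed_check
        if not check(environ):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"blocked by bot policy"]
        return self.app(environ, start_response)

Wrapping an existing WSGI application with TimeAwareBotFilter(app, strict_check, relaxed_check) applies the schedule on every request, with the trade-off noted above: changing the policy means touching code that sits directly in the data path.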

Your vehicle of implementation is really up to you.

The concept is really what's important - and that's using programmability (in all its forms) to improve agility without compromising on stability, even in the security realm. Because when it comes to networks and security, the blast radius when you get something wrong is really, really big. And not being able to adapt in the realm of security means you'll fall further and further behind the attackers, who are adapting every single day.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.