Tufin Firewall Expert Tip #6 – How to Clean Up a Firewall Rulebase

July 19, 2010 by

Over time, firewall rulebases tend to become large and complicated. They often include rules that are either partially or completely unused, expired or shadowed. The problem gets worse if there have been multiple administrators making changes or if there are many firewalls in your organization.

A big, tangled rule base degrades firewall performance, is difficult to maintain, and can conceal genuine security risks. Standards such as PCI-DSS also require the clean-up of unused rules and objects.

With some help from our customers, I’ve put together a list of best practices for cleaning up a firewall (or router) rule base. You can do all of these checks on your own, but if you have Tufin SecureTrack you can run most of them automatically.

  • Delete fully shadowed rules that are effectively useless (a simplified illustration of shadowing detection follows this list). If you have SecureTrack, these are detected by the Rule and Object Usage report.
  • Delete expired and unused rules and objects. All of these are detected by the Rule and Object Usage and the Expired Rules reports.
  • Remove unused connections – specific source/destination/service routes that are not in use. You can detect those using the Automatic Policy Generator to analyze traffic patterns.
  • Enforce object naming conventions that make the rule base easy to understand. For example, use a consistent format such as host_name_IP for hosts. This is an option in the SecureTrack Best Practices report.
  • Delete old and unused policies. Check Point and some other vendors allow you to keep multiple rule bases. This is another test in the Best Practices report.
  • Remove duplicate objects, for example, a service or network host that is defined twice with different names.  The Best Practices Report can identify these.
  • Reduce shadowing as much as possible. You can detect partially shadowed rules with Policy Analysis.
  • Break up long rule sections into readable chunks of no more than 20 rules. This too can be checked with the Best Practices report.
  • Document rules, objects and policy revisions – for future reference. You can do this with some vendor tools. In SecureTrack, you can document revisions, for instance, to indicate when an audit was performed. You can also link policy changes to tickets from your help desk to store additional information about the requestor, approver, etc. You can enforce a standard for rule documentation with the Rule Comments Format test in the Best Practices report.
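
To make the idea of a fully shadowed rule concrete, here is a minimal, vendor-neutral sketch in Python. It assumes a deliberately simplified rule model (plain sets of named objects, with "any" as a wildcard) purely for illustration; real rule bases with groups, negation and coverage accumulated across several earlier rules need a proper tool such as Policy Analysis.

```python
# Toy model: a rule matches traffic if source, destination and service all match.
# "any" is a wildcard. Here a rule counts as fully shadowed if a single earlier
# rule matches a superset of its traffic; a real analysis must also consider the
# rule action and combined coverage from several earlier rules.

from dataclasses import dataclass

ANY = {"any"}

@dataclass
class Rule:
    name: str
    src: set
    dst: set
    svc: set
    action: str  # "accept" or "drop" (ignored in this simplified check)

def covers(a: set, b: set) -> bool:
    """True if field value 'a' matches at least everything 'b' matches."""
    return a == ANY or (b != ANY and b <= a)

def shadowed_by(earlier: Rule, later: Rule) -> bool:
    """True if 'earlier' matches a superset of the traffic 'later' matches."""
    return (covers(earlier.src, later.src)
            and covers(earlier.dst, later.dst)
            and covers(earlier.svc, later.svc))

def fully_shadowed(rulebase: list) -> list:
    """Names of rules that can never be hit because an earlier rule covers them."""
    result = []
    for i, rule in enumerate(rulebase):
        if any(shadowed_by(prev, rule) for prev in rulebase[:i]):
            result.append(rule.name)
    return result

rules = [
    Rule("r1", {"corp-net"}, ANY, {"http", "https"}, "accept"),
    Rule("r2", {"corp-net"}, {"dmz-web"}, {"https"}, "accept"),  # covered by r1
    Rule("r3", ANY, ANY, ANY, "drop"),
]

print(fully_shadowed(rules))   # ['r2']
```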

Last but not least, some of your most important security checks also help you maintain a clean, compact rule base. If you use SecureTrack, try these:

  • Tighten up permissive rules: Run the Rule Volume Report (under /tools on the SecureTrack Web GUI) to detect rules that are too open and tighten them up with the Automatic Policy Generator.
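
If you want a quick, back-of-the-envelope view of permissive rules before running the report, you can simply count how many fields of each rule are wide open. The rule structure below is made up for illustration only; the Rule Volume Report measures actual address-space volume, which is far more accurate.

```python
# Rough permissiveness check: count how many of source/destination/service are
# "any". Rules scoring 2 or more are usually worth a closer look and a tighten-up.

SAMPLE_RULES = [
    {"name": "allow-web",   "src": "corp-net", "dst": "dmz-web", "svc": "https"},
    {"name": "temp-vendor", "src": "any",      "dst": "any",     "svc": "any"},
    {"name": "mgmt-ssh",    "src": "any",      "dst": "fw-mgmt", "svc": "ssh"},
]

def permissiveness(rule: dict) -> int:
    """Number of wide-open fields in a rule (0..3)."""
    return sum(rule[f].lower() == "any" for f in ("src", "dst", "svc"))

for rule in sorted(SAMPLE_RULES, key=permissiveness, reverse=True):
    score = permissiveness(rule)
    if score >= 2:
        print(f"{rule['name']}: {score} wide-open fields - candidate for tightening")
```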

I’m putting together a vendor-specific list of suggestions and if you have any, I’d like to hear from you.

Reuven

Help, Thief!

May 6, 2010 by

Yesterday we got robbed. Well, burgled.

Some guy entered our office, snatched four wallets and left unnoticed.

We finally realized what had happened when someone noticed his wallet was missing and, soon after, two other people noticed theirs were gone as well.

I asked everyone in the office whether they had noticed anything exceptional during the day or had seen strangers in the office. However, it was tricky because we have many contractors here whom not everyone knows.

We made a list of all the people who had been seen entering the office that day, and quickly pinpointed a prime suspect. Funnily enough, it was a guy we had actually met at the entrance to our office on our way out to lunch, and had politely offered assistance!

The surveillance videos confirmed our suspicions: the suspect entered using the front door code, visited some offices, and left within three minutes. Apparently he’s a semi-pro, carrying a large paper folder he pretends to read and wearing dark sunglasses in front of the cameras.

24 hours later, a fourth person noticed his wallet was gone.

We’ll report it to the police today but I doubt they’ll catch him. We will also review our security procedures and update them.

So, what happened?

  1. The incident
  2. The victim realizes what happened
  3. An investigation
  4. Conclusions – we understand how the incident happened
  5. Lessons: enhance security

Does this remind you of something? It is so similar to IT security!

It’s funny to re-read Ruvi’s recent interview in SC Magazine – it’s all about the convergence of physical and IT security.

I also found this useful paper by Verizon Business.

Business processes and Lego

April 22, 2010 by

As a boy I loved Lego. I’d use the red and green and white bricks that, in those days, came in just a few shapes to construct houses, ships, cars and stairways that led nowhere. It was all about fun and imagination.

Last week, I was at the Check Point experience in London where I was demonstrating our workflow solution. It was a real delight meeting you out there and discussing our vision in light of your invaluable real-world experience (the bar was also not bad).

It was during the first day that I suddenly saw the analogy between our approach and Lego, and how important it is to providing a good solution.

After a couple of years of presenting our solution to security people from over one hundred organizations worldwide, I came to realize that there is no such thing as a standard process for managing changes to the security policy.

While one organization starts off with an access request which is then approved by a line manager, another may first want to design the change. Some want to allow requesters to specify the target firewalls while others keep them strictly within the domain of the firewall operations group. Not to mention the gazillion types of forms I have seen out there.

Of course, it would be easier if we could say “here’s how you should be working” and provide one ideal workflow but things just don’t work like that. Every organization has developed processes that match their needs and organizational structures and policies. Beyond technical constraints there are also social and political factors that have shaped these processes and they cannot be modified easily.

So instead of a single rigid process we chose to provide small building blocks that can be assembled into each organization’s processes (a toy sketch follows the list), things like:

  • Permissions and roles
  • Users and groups
  • Workflows that are composed of configurable steps
  • Forms that consist of fields such as input fields and drop-down lists
  • Application flows (the requested access paths) that can change their appearance to match the needs of users with different roles
  • Dynamic yet controllable workflows so that users have flexibility within a fixed framework
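
As a purely illustrative toy (this is not how SecureChange Workflow is actually implemented), here is what composing a process from such bricks might look like: fields make up forms, forms attach to steps, and steps assemble into a workflow, so each organization can wire up its own process.

```python
# Toy "Lego brick" model of a configurable change workflow. Names and structure
# are illustrative only - the point is composition, not fidelity to any product.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Field:
    label: str
    kind: str = "text"          # e.g. "text", "dropdown", "ip-address"
    choices: List[str] = field(default_factory=list)

@dataclass
class Form:
    name: str
    fields: List[Field]

@dataclass
class Step:
    name: str
    role: str                   # which role handles this step
    form: Form

@dataclass
class Workflow:
    name: str
    steps: List[Step]

# One organization's process: request -> manager approval -> implementation.
access_request = Form("Access request", [
    Field("Source"), Field("Destination"),
    Field("Service", "dropdown", ["http", "https", "ssh"]),
])

workflow = Workflow("Firewall change", [
    Step("Submit request", role="requester", form=access_request),
    Step("Approve", role="line-manager",
         form=Form("Approval", [Field("Decision", "dropdown", ["approve", "reject"])])),
    Step("Implement", role="firewall-ops",
         form=Form("Implementation notes", [Field("Ticket ID")])),
])

for step in workflow.steps:
    print(f"{step.name}: handled by {step.role}, form '{step.form.name}'")
```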

This Lego approach makes our solution effective in a variety of environments with differing processes including ones we haven’t even seen or anticipated.

Now I’m doing real Lego again with my daughters but this time it’s princesses and castles instead of cars and ships. Yep, it’s all about fun and imagination.

Reuven.

Firewall Policy Management Webcast with Gartner’s Greg Young

March 29, 2010 by
We recently had a chance to catch up with Greg Young, Gartner’s research VP on Firewalls, and discuss the Firewall Policy Management market.

Greg spoke about several topics:

Greg Young
  • Why has firewall policy management become a challenge for companies today?
  • How effectively are companies handling firewall auditing and compliance requirements?
  • What can companies do to manage their network security operations more efficiently?
  • How can automated solutions be used to manage risk?
  • Are these issues also relevant for other parts of the network infrastructure?
  • What are the advantages of using business process automation for security change requests?
  • What is the role of firewall policy management, auditing and change automation solutions for MSSPs?
The result is a very interesting (IMHO) 34-minute video, with Greg talking about the topics listed above, as well as a segment featuring yours truly, sharing Tufin’s vision and thoughts on this market.

    Tufin Firewall Expert Tip #5: Nasty configuration mistakes: How to avoid them and how to recover

    March 14, 2010 by

    Ongoing changes to network and security device configuration are unavoidable and necessary for business. But they are also risky. They can have unexpected consequences – from service interruptions to performance degradation and even downtime.

    How can you reduce the risk associated with configuration changes? Here is a 3-tier strategy:

    1. Reduce the likelihood of configuration errors:

    • Monitor and review changes
    • Establish change procedures and processes
    • Establish a test plan for all changes

    2. Detect problems as early as possible:

    • Monitor the environment
    • Listen to your users

    3. Prepare for a fast recovery if something goes wrong:

    • Maintain accessible, actionable audit information
    • Establish standard recovery procedures

    Finally, implementing solutions that can automate error-prone, repetitive tasks and can maintain vigilance 24 hours a day goes a long way to preventing, and recovering from, human configuration errors.

    Monitor and review changes

    Even if they look simple, all configuration changes should be monitored and reviewed. For example, suppose you’re adding a host to a network group in order to provide access, and you are unaware that the same group is used in a different place to block traffic. Another pair of eyes will often catch something you missed.
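
    A second pair of eyes can be helped along by a simple pre-change check: list every rule that references the object you are about to touch. The sketch below uses a trivial, made-up in-memory rule representation just to show the idea; in practice this is what a rule and object usage view gives you.

```python
# Before changing a group (e.g. adding a host to "branch-servers"), list every
# rule that references it, including the action. A group that appears in both
# accept and drop rules is a red flag: one change will affect both behaviours.

SAMPLE_RULES = [
    {"name": "allow-app",      "src": "branch-servers", "dst": "app-net",     "action": "accept"},
    {"name": "block-internet", "src": "branch-servers", "dst": "internet",    "action": "drop"},
    {"name": "allow-dns",      "src": "corp-net",       "dst": "dns-servers", "action": "accept"},
]

def rules_referencing(obj_name: str, rules: list) -> list:
    """All rules in which the object appears as source or destination."""
    return [r for r in rules if obj_name in (r["src"], r["dst"])]

hits = rules_referencing("branch-servers", SAMPLE_RULES)
for r in hits:
    print(f"{r['name']}: {r['src']} -> {r['dst']} ({r['action']})")

if len({r["action"] for r in hits}) > 1:
    print("Warning: object is used in rules with different actions - review before changing it.")
```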

    Establish change procedures and processes

    Change requests must be communicated consistently so that the right people can review them and assess their impact. Many problems can be avoided simply with good communication. Some organizations schedule weekly change review meetings to understand and plan complex changes. But the most effective way to ensure that changes are reviewed and approved is by enforcing a change process workflow.

    Establish a test plan for all changes

    It may sound surprising, but many changes are not tested until hours or days after implementation, while some are never tested at all. A test plan for every change is a critical part of the change process. Sometimes this isn’t as easy as it sounds and involves coordinating end users, business partners, and professional testers. The work you put in here will build credibility in your team’s ability to get things right.

    Monitor the environment

    The firewall environment should be continuously monitored, and abnormal behavior should automatically trigger alerts. The firewall environment might include the operating system, the network interfaces, the firewall software, the firewall hardware, and the firewall rule base. Events from these components should be analyzed and correlated and, if necessary, escalated for a closer look.
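
    As a tiny illustration of that kind of vigilance, the sketch below periodically snapshots a device configuration and raises an alert when it changes. fetch_config() and send_alert() are hypothetical placeholders for your own retrieval (SSH, API, ...) and alerting channels.

```python
# Minimal change detector: hash the current configuration and compare it with
# the previous snapshot. Replace the placeholders with real retrieval/alerting.

import hashlib
import time

def fetch_config(device: str) -> str:
    raise NotImplementedError("replace with your own config retrieval")

def send_alert(message: str) -> None:
    print(f"ALERT: {message}")        # replace with email/syslog/ticket creation

def monitor(device: str, interval_seconds: int = 300) -> None:
    last_digest = None
    while True:
        config = fetch_config(device)
        digest = hashlib.sha256(config.encode()).hexdigest()
        if last_digest is not None and digest != last_digest:
            send_alert(f"configuration of {device} changed (new digest {digest[:12]})")
        last_digest = digest
        time.sleep(interval_seconds)

# monitor("fw-dc-01")  # uncomment once fetch_config() is implemented
```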

    Listen to your users

    A helpdesk should be in place so that users can easily report problems. The helpdesk should be manned with trained personnel and have clear processes for handling incidents. Have a plan for correlating multiple incidents to a single problem. Each team should have tools to assist root cause analysis before escalation to the next level.

    Maintain accessible, actionable audit information

    Each and every change must be documented properly and recorded in an audit trail. A comprehensive audit trail should include the target device, the exact time of the change, the configuration details, the people who were involved (requestor, approvers, implementor), and the change context such as the project or application.

    But a detailed audit trail is not enough on its own. The information must also be presented in an easy-to-read format so that you can easily access it when needed. Additionally, you’ll want to have filtering and querying capabilities on top of the data to speed up searches and lookups.
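
    As a sketch of what “accessible and actionable” can mean, the snippet below models a single audit record with the fields listed above, plus a simple filter over a list of records. Field names are illustrative; a real system keeps this in a database with proper indexing, reports and a UI.

```python
# One audit-trail entry per change, with the fields mentioned above, and a
# simple query helper for filtering by device, time window and implementer.

from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class AuditRecord:
    device: str
    timestamp: datetime
    details: str            # what exactly was changed
    requestor: str
    approver: str
    implementer: str
    context: str            # project, application or ticket reference

def query(records: List[AuditRecord],
          device: Optional[str] = None,
          since: Optional[datetime] = None,
          implementer: Optional[str] = None) -> List[AuditRecord]:
    """Filter the audit trail by device, time window and implementer."""
    return [r for r in records
            if (device is None or r.device == device)
            and (since is None or r.timestamp >= since)
            and (implementer is None or r.implementer == implementer)]

trail = [
    AuditRecord("fw-dc-01", datetime(2010, 3, 12, 9, 30),
                "added host web-07 to group dmz-web",
                "j.smith", "a.jones", "ops-1", "CRM rollout"),
]
print(query(trail, device="fw-dc-01"))
```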

    Prepare for rapid recovery

    Now comes the incident. Despite everything, something bad has happened and you need to respond. You will be judged by how long it takes to recover, so you want to be well prepared with tools, staff and processes to handle this. You want to keep stress down to a minimum.

    If you have set up the procedures above, you are already in pretty good shape. Either you caught the problem during the change process or, if it was missed, you can discover it early, before users and services are affected. Thanks to the audit trail, you know exactly what changes have been made lately, by whom, and why. Experts agree that most recovery time is spent figuring out what changed; so if you already know, recovery times are going to be much shorter. You run some quick queries to pinpoint likely suspects and you can roll back the changes quickly.

    If you use Tufin solutions, there are a number of tools that can help you control changes, detect problems, and recover from errors:

    • A complete audit trail with full accountability and integration with ticketing systems
    • Comprehensive change reports and side-by-side diffs for rule bases, objects and textual configurations
    • Real-time change notifications with filtering (by change type, device, affected networks)
    • Central console for viewing all recent changes across all devices regardless of vendor and model
    • A policy analysis tool for determining which firewalls and rules are blocking services across an environment
    • Rule and object change history reports
    • Business process automation to manage the change process (SecureChange Workflow) and integration with existing ticketing systems

    Let us know how you recover from configuration mistakes.

    Reuven

    Record sales and growth in 2009, market validation by Gartner

    March 10, 2010 by

    Hi everyone,

    It’s been a while since my last post…

    First off, today we announced record numbers and growth in 2009: 45% revenue growth over 2008, over 4000% aggregate revenue growth in the past 5 years, moving from 280 to over 500 customers, and much more. It was especially challenging to achieve this in a year like 2009, which was not “optimal”, to say the least. We maintained profitability and were again cash-flow positive (we’ve always been cash-flow positive, and it’s a tradition we expect to continue).

    Those of you at Tufin, as well as our customers and partners, know that the real “secret sauce” is having great people – it’s a pleasure and an honor to work with such a talented team, and I’m constantly inspired by everyone at the company.

    We are continuing our growth in 2010 with many new open positions, in sales, marketing, support and R&D – we’re always looking for great people to join the Tufin family.

    Another interesting piece I heard today was a podcast by Vic Wheatman and Greg Young, VP of research at Gartner, who specializes in network security and firewalls (the podcast is available to Gartner customers only). The podcast topics were mostly Next Generation Firewalls and Firewall Policy Management. Greg indicated that the two areas of innovation around the firewall space are NG firewalls and, of course, Firewall Policy Management. Firewall policy optimization and workflow were mentioned as key topics and referred to as the future of firewalls.

    Greg is absolutely right in his assessment of the market – we’ve been talking about this for the past few years, and now it seems that the concept of vendor-neutral firewall policy management has finally taken hold. The market validation is there (500 enterprise customers can’t be wrong), and the products have matured enough to be widely accepted. It’s great to have validation not just from the customers, but also from the security experts and opinion leaders out there.

    Oh yeah, almost forgot – RSA Security last week was great, here are a few pictures:

    Tufin Team at RSA Security, 2010

    Take care,

    Ruvi

    Security: Moving Beyond Firewall Configuration Management

    February 15, 2010 by

    Platen describes the ultimate network management suite with a comment from me:

    http://platen.wordpress.com/2010/02/10/security-moving-beyond-firewall-configuration-management/

    Tufin Firewall Expert Tip #4: Vendor and model-specific tips for optimizing firewall performance

    February 9, 2010 by

    Do your firewalls need a tune-up? In our last column, we looked at general ways that firewall performance can be improved to overcome problems such as high CPU utilization, low throughput, and slow applications.

    This time, we are following up with a list of vendor and model-specific tips and best practices that can help you to optimize your firewall infrastructure. Thank you to everybody who responded to the last column and sent in their own tips!

    Check Point

    1. Use networks instead of address ranges in NAT.
    2. Avoid rules with Ident.
    3. Replace nested groups with flat groups.
    4. Be aware of configurations that SecureXL templates (fastpath) cannot handle, for example, Security Servers or SYNDefender.
    5. Note that SecureXL templates can be disabled from a certain rule onwards due to certain configurations such as client auth, time objects, etc.
    6. Be aware of configurations that SecureXL cannot handle, for example:
      • FloodGate-1 (automatically disables SecureXL)
      • Rules with user authentication
      • Services with a port number range (disables connection-rate acceleration)
      • Time object associated with the rule (disables connection-rate acceleration)
    7. Be aware of SmartDefense configurations that may impact performance:
      • Network Security -> Fingerprint scrambling -> ISN spoofing
      • Network Security -> Fingerprint scrambling -> TTL

    Cisco all models

    1. Debug messages are known to affect performance.

    PIX 6.3

    1. TCP Intercept is known to impact performance.
    2. If you are not using NAT and have no DNAT communications, disable the ILS fixup.

    Cisco IOS Firewall

    1. Performance may be affected if the value of ‘ip inspect one-minute high’ is far greater than the value reported by the ‘show ip audit stat’ command.

    Cisco ASA

    1. Verifying TCP checksums may impact firewall performance.
    2. Ideal performance is achieved when traffic enters and exits ports on the same adapter or ports on adapters serviced by the same I/O bridge (ASA 5580).

    Cisco FWSM

    1. Deep packet inspection may cause high CPU (all inspection engines except for SMTP are handled in software).
    2. Before release 3.1, non-UDP/TCP/ICMP flows are handled on a packet-by-packet basis. With 3.1 and higher, the FWSM creates flows in NP1 and NP2.
    3. Be aware of features that are not offloaded to the network processors; they will consume CPU.
    4. Built-in ACL optimization algorithm: FWSM Release 4.0 incorporates an algorithm that optimizes ACLs by coalescing contiguous subnets referenced in different access-control entries into a single statement and detecting overlaps in port ranges (a small illustration of subnet coalescing follows this list). Note that after the optimization process, the ACL is likely to differ from the original one.
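
    The coalescing idea itself is easy to see with Python’s standard ipaddress module. This shows the general technique only, not the FWSM implementation: contiguous or overlapping subnets collapse into the smallest equivalent set of networks.

```python
# Collapse contiguous/overlapping subnets into the minimal equivalent set -
# the same general idea as the FWSM 4.0 ACL optimization described above.

import ipaddress

entries = [
    ipaddress.ip_network("10.1.0.0/24"),
    ipaddress.ip_network("10.1.1.0/24"),    # contiguous with the previous one
    ipaddress.ip_network("10.1.0.128/25"),  # contained in the first one
]

for net in ipaddress.collapse_addresses(entries):
    print(net)   # -> 10.1.0.0/23
```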

    Juniper (ScreenOS)

    1. ALG (application layer gateway) is applied globally to all policies by default but may have a major impact on performance. Disabling it on specific policies can make a significant improvement.
    2. On the high-end firewall platforms NS-5000, ISG-1000 and ISG-2000, with ScreenOS 6.2 and above, Juniper switched the default rule search algorithm from “hardware” (ASIC) to “software” (CPU). The software search algorithm provides faster policy search times than older versions when a zone pair has more than 500 rules, but it can cause high CPU during policy changes.
    3. ScreenOS 6.1: using wildcard address/wildcard policy causes a performance penalty.

    Fortinet

    1. Enable only the management features you need. If you don’t need SSH or SNMP, don’t enable them.
    2. Enable only the required application inspections.
    3. Minimize use of alert systems. If you export syslog, you may not need SNMP or email alerts.
    4. Establish auto-updates (scheduled updates) at a reasonable rate. Every 4 or 5 hours should be OK in most cases.
    5. Minimize use of Protection Profiles. If you don’t need a Protection Profile on a firewall rule, don’t put it there.
    6. Minimize use of Virtual Domains and avoid them completely on low-end models.
    7. Avoid Traffic Shaping if you need maximum performance. By definition, Traffic Shaping slows down traffic.

    As usual, I’d love to have your feedback,
    Reuven.

    Tufin Firewall Expert Tip #3: Best practices for optimizing firewall performance

    January 11, 2010 by

    Is your firewall overloaded? Symptoms include high CPU, low throughput and slow applications.  Before upgrading your hardware, it is worth checking whether the firewall configuration can be optimized.

    Optimization techniques can be divided into two groups – general best practices, and vendor-specific, model-specific configurations. This column focuses on best practices. Next time, we will look at vendor-specific tips, so if you have any to share, we would like to hear from you.

    Optimizing firewalls for better performance and throughput:

    1. Remove bad traffic and clean up the network. Notify server administrators about servers hitting the firewall directly with denied outbound DNS/NTP/SMTP/HTTP(S) requests, as well as about internal devices whose traffic is dropped or rejected. The administrators should then reconfigure the servers not to send this type of unauthorized outbound traffic, thereby taking load off the firewall.
    2. Filtering unwanted traffic can be spread among firewalls and routers to balance the performance and effectiveness of the security policy.
      1. Identify the top inbound dropped requests that are candidates to move upstream to the router as ACL filters (a small log-aggregation sketch follows this list). This can be time-consuming, but it is a good method for moving blocks upstream to the router and saving firewall CPU and memory.
      2. If you have an internal choke router inside your firewall, also consider moving common outbound traffic blocks to your choke routers, freeing more processing on your firewall.
    3. Remove unused rules and objects from the rule bases.
    4. Reduce rule base complexity – rule overlapping should be minimized.
    5. Create a rule to handle broadcast traffic (bootp, NBT, etc.) with no logging.
    6. Place the heavily used rules near the top of the rule base. Note that some firewalls (such as Cisco PIX/ASA version 7.0 and above, FWSM 4.0, and certain Juniper Networks models) don’t depend on rule order for performance, since they use optimized packet-matching algorithms.
    7. Avoid DNS objects, which require a DNS lookup on all traffic.
    8. Your firewall interfaces should match your switch and/or router interfaces.  If your router is half duplex your firewall should be half duplex.  If your switch is 100 Mbit your firewall interface should be hard-set to match your switch; both should most likely be hard-set to 100 Mbit full duplex. Your switch and firewall should both report the same speed and duplex mode. If your switch is gigabit, your switch and firewall should both be set to auto-negotiate both speed and duplex.  If your gigabit interfaces do not match between your firewall and switch, you should try replacing the cables and patch panel ports. Gigabit interfaces that are not linking at 1000 Mbit full duplex are almost always a sign of other issues.
    9. Separate firewalls from VPNs to offload VPN traffic and processing.
    10. Offload UTM features from the firewall: AV, AntiSpam, IPS, URL scanning.
    11. Upgrade to the latest software version. As a rule of thumb, newer versions contain performance enhancements but also add new capabilities, so a performance gain is not guaranteed.
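
    For the upstream ACL candidates mentioned in item 2, aggregating your drop logs is usually enough to find the top offenders. The sketch below assumes a made-up, whitespace-separated log format (action, source, destination, port); adjust the parsing to whatever your firewall actually emits.

```python
# Count the most frequently dropped source/destination/port combinations.
# The log format here (action src dst port) is an assumption - adapt the
# parsing to your firewall's real log/syslog format.

from collections import Counter

def top_drops(log_lines, n=10):
    counter = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) >= 4 and fields[0].lower() in ("drop", "deny"):
            src, dst, port = fields[1:4]
            counter[(src, dst, port)] += 1
    return counter.most_common(n)

sample = [
    "drop 203.0.113.7 192.0.2.10 tcp/445",
    "drop 203.0.113.7 192.0.2.10 tcp/445",
    "accept 10.0.0.5 192.0.2.20 tcp/443",
    "deny 198.51.100.3 192.0.2.10 tcp/23",
]

for (src, dst, port), count in top_drops(sample):
    print(f"{count:>5}  {src} -> {dst} {port}   (candidate for an upstream ACL)")
```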

    If you use Tufin SecureTrack, you can automate a number of these tasks.

    Here are a few ways that SecureTrack can help:

    1. Identify unused rules and objects with the Rule and Object Usage Report, and consider removing them. The longer the reporting period, the more reliable the rule usage status will be. Remember that certain rules, like the ones allowing disaster recovery services, are used only rarely. You can also identify and clean up unused group members.
    2. Analyze rule shadowing with Policy Analysis. Run Policy Analysis with “Any;Any;Any;Any” to identify completely shadowed rules. These rules are redundant and should be deleted. You can re-validate the redundancy with an unused rules report.
    3. Identify the most-used rules with the Rule and Object Usage Report and move them up in the rule base hierarchy. To find the top-most location for placing a rule without affecting connectivity, run an “Any;Any;Any;Any” policy analysis query, then, for each most-used rule:
      1. If it is not shadowed, move it to any higher location.
      2. If it is shadowed, find the lowest-ranked shadowing rule with a contradictory action and place the most-used rule below that one (a simplified sketch of this placement check follows this list).
    4. Other things to keep in mind when re-ordering rules:
      1. You’ll probably want to preserve the rule base structure, for example, rule grouping by service or application, source or destination, projects etc.
      2. Be careful with policies containing rules with special actions such as authentication or encryption – shadowing becomes more tricky in this case.
    5. You may also use the Best Practices Rule Order Optimization test to quickly identify candidates for relocation.
    6. Use the Automatic Policy Generator (APG) to identify and remove unwanted traffic from the firewall. Read more about APG here.
    7. Use the “Software Version Compliance Report” to control your firewall software versions.
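
    Here is a highly simplified sketch of the placement logic from item 3, assuming you already know, for each heavily used rule, which earlier rules shadow it and what their actions are. It works on plain dictionaries and ignores groups, sections and special actions, so treat it as an illustration of the idea rather than a reordering tool.

```python
# A heavily used rule can be moved up, but never above the lowest earlier rule
# that partially shadows it with a contradictory action - otherwise
# connectivity would change.

def safe_position(rulebase, rule_index, shadowing_pairs):
    """
    rulebase        : list of dicts with at least "name" and "action"
    rule_index      : index of the heavily used rule we want to move up
    shadowing_pairs : set of (i, j) meaning rule i (earlier) shadows rule j
    Returns the highest (top-most) index the rule can safely be moved to.
    """
    rule = rulebase[rule_index]
    blocking = [i for (i, j) in shadowing_pairs
                if j == rule_index and rulebase[i]["action"] != rule["action"]]
    # Must stay below the lowest-ranked (largest index) contradictory shadowing rule.
    return (max(blocking) + 1) if blocking else 0

rules = [
    {"name": "block-guest", "action": "drop"},
    {"name": "allow-dns",   "action": "accept"},
    {"name": "busy-web",    "action": "accept"},   # heavily used, we want it higher
]
# Suppose analysis tells us rule 0 partially shadows rule 2.
print(safe_position(rules, 2, {(0, 2)}))   # -> 1: can move up to just below "block-guest"
```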

    Last but not least, remember that optimization can have a price, too – beyond the time you’ve invested. If you are not careful, you can wind up with a rule base which is too hard to maintain.  If you have the budget, there are times when upgrading the hardware is the easiest alternative.

    In our next column, we’ll focus on ways to optimize specific models of firewalls from the different vendors. If you have any tips that you would like to share, please contact me at rh@tufin.com.

    Tufin Firewall Expert Tip #2: Analyzing Network Connectivity Problems

    November 10, 2009 by

    Network connectivity problems are some of the most common – and aggravating – for business users. With distributed systems, as soon as an application does not behave as expected, the firewall is suspect. There are many other possible points of failure – the client application, the user’s PC, intermediate switches, routers, filters, load balancers and the application itself. But, because of its nature (secretive and designed to keep people out), the firewall is a prime suspect. As a firewall administrator, you are guilty until proven innocent.

    So how can you quickly determine if the problem is due to the firewall or not?

    One approach is to analyze the firewall traffic logs. Contact the user, obtain his IP address and ask him to access the application again. Ideally, this should trigger the connection in question. Then you can review the firewall traffic logs and locate the dropped or accepted packets. How easy this is depends on your tools – unless you have a smart log browser, you may have to work with raw syslogs. Normally there will be a lot of log entries, so a filter on the source IP and, if possible, on the destination IP or port will make things easier.
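
    If all you have is raw syslog, even a crude filter helps. The sketch below keeps only lines mentioning the user’s source IP and, optionally, the destination IP or port; it does plain substring matching, so it works on any text log format without knowing the parsing details.

```python
# Crude but effective: keep only log lines mentioning the user's source IP and,
# if given, the destination IP or port. Plain substring matching, no parsing.

def filter_log(lines, src_ip, dst_ip=None, port=None):
    for line in lines:
        if src_ip not in line:
            continue
        if dst_ip is not None and dst_ip not in line:
            continue
        if port is not None and str(port) not in line:
            continue
        yield line

sample_lines = [
    "Nov 10 09:12:01 fw1 drop 10.1.2.3 -> 192.0.2.50:443",
    "Nov 10 09:12:02 fw1 accept 10.9.9.9 -> 192.0.2.50:443",
]
for line in filter_log(sample_lines, "10.1.2.3", "192.0.2.50", 443):
    print(line)

# In practice you would iterate over a file instead of a list:
#   with open("firewall.log") as f:
#       for line in filter_log(f, "10.1.2.3"):
#           print(line)
```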

    Unfortunately, in many cases, you will find nothing. One possible reason is that the rule that allows or blocks the relevant traffic is not configured to generate logs. Another possibility is that you are not looking at the right firewall or are simply missing the relevant logs.

    Another method is to analyze the firewall rule base. In many cases this is not feasible due to the size and complexity of the network and firewall policies. But if the network is relatively small and you know the rule base very well, you may be able to narrow the problem down to a specific rule or to a recent change that might have affected the application flow.

    If you have Tufin SecureTrack, you can use the Policy Analysis tool to query the rule base. Get the user’s IP address, the IP address of the application and the service or port, if possible. Log into SecureTrack and create a policy analysis query with these inputs. You can run the query on all firewalls or, if you are sure which ones are relevant, on a subset. For the report, you can choose to show all traffic or only dropped or accepted traffic.  SecureTrack does not send any packets over the network. It analyzes its own copy of the rule base, which is always up to date from continuous monitoring. Since SecureTrack does not depend on traffic logs, it doesn’t matter whether log data is missing or unavailable.

    Policy Analysis will quickly determine whether the firewalls are allowing the user’s traffic or not. If it turns out that the firewall is, in fact, blocking traffic, Policy Analysis will point you to the rule that’s causing the problem as well as when it was last changed, and by whom.