Author Archive

Help, Thief!

May 6, 2010

Yesterday we got robbed. Well, burgled.

Some guy entered our office, snatched four wallets and left unnoticed.

We finally realized what had happened when someone noticed his wallet was missing and, soon after, two other people discovered theirs were gone as well.

I asked everyone in the office whether they had noticed anything exceptional during the day or had seen strangers in the office. However, it was tricky because we have many contractors here whom not everyone knows.

We made a list of all the people who had been seen entering the office that day, and quickly pinpointed a prime suspect. Funnily enough, it was a guy we had actually met at the entrance to our office when we were going out to lunch – and politely offered him assistance!

The surveillance videos confirmed our suspicions: the suspect entered using the front door code, visited some offices, and left within three minutes. Apparently he’s a semi-pro, carrying a large paper folder he pretends to read and wearing dark sunglasses in front of the cams.

24 hours later, a fourth person noticed his wallet was gone.

We’ll report it to the police today but I doubt they’ll catch him. We will also review our security procedures and update them.

So, what happened?

  1. The incident
  2. The victim realizes what happened
  3. An investigation
  4. Conclusions – we understand how the incident happened
  5. Lessons: enhance security

Does this remind you of something? It is so similar to IT security!

It’s funny to re-read Ruvi’s recent interview in SC Magazine – it’s all about the convergence of physical and IT security.

I also found this useful paper by Verizon Business.


Business processes and Lego

April 22, 2010

As a boy I loved Lego. I’d use the red and green and white bricks that, in those days, came in just a few shapes to construct houses, ships, cars and stairways that led nowhere. It was all about fun and imagination.

Last week, I was at the Check Point Experience in London, where I was demonstrating our workflow solution. It was a real delight meeting you out there and discussing our vision in light of your invaluable real-world experience (the bar was also not bad).

It was during the first day that I suddenly realized the analogy between our approach and Lego, and how important it is in providing a good solution.

After a couple of years of presenting our solution to security people from over one hundred organizations worldwide, I came to realize that there is no such thing as a standard process for managing changes to the security policy.

While one organization starts off with an access request which is then approved by a line manager, another may first want to design the change. Some want to allow requesters to specify the target firewalls while others keep them strictly within the domain of the firewall operations group. Not to mention the gazillion types of forms I have seen out there.

Of course, it would be easier if we could say “here’s how you should be working” and provide one ideal workflow but things just don’t work like that. Every organization has developed processes that match their needs and organizational structures and policies. Beyond technical constraints there are also social and political factors that have shaped these processes and they cannot be modified easily.

So instead of a single rigid process we chose to provide small building blocks that can be compiled into the organizational processes, things like:

  • Permissions and roles
  • Users and groups
  • Workflows that are composed of configurable steps
  • Forms that consist of fields such as input fields and drop down lists
  • Application flows (the requested access paths) that can change their appearance to match the needs of users with different roles
  • Dynamic yet controllable workflows so that users have flexibility within a fixed framework

This Lego approach makes our solution effective in a variety of environments with differing processes including ones we haven’t even seen or anticipated.
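To make the analogy concrete, here is a small sketch of the composition idea. All names are invented for illustration – this is not how our product is implemented – but it shows how the same bricks can be assembled into different organizational processes:

```python
from dataclasses import dataclass, field
from typing import Callable

# A hypothetical sketch: each step is a small, self-contained brick,
# and a workflow is just an ordered composition of steps.
@dataclass
class Step:
    name: str
    handler: Callable[[dict], dict]  # takes and returns the request state

@dataclass
class Workflow:
    name: str
    steps: list = field(default_factory=list)

    def add(self, step: Step) -> "Workflow":
        self.steps.append(step)
        return self  # allow chaining, Lego-style

    def run(self, request: dict) -> dict:
        for step in self.steps:
            request = step.handler(request)
        return request

# Two organizations assemble different processes from the same bricks:
# one approves first and then designs, the other designs first.
approve = Step("manager approval", lambda r: {**r, "approved": True})
design = Step("design change", lambda r: {**r, "design": "draft"})

org_a = Workflow("org A").add(approve).add(design)
org_b = Workflow("org B").add(design).add(approve)

print(org_a.run({"request": "open port 443"}))
```

The point is that the bricks stay the same; only the composition changes per organization.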

Now I’m doing real Lego again with my daughters, but this time it’s princesses and castles instead of cars and ships. Yep, it’s all about fun and imagination.


Tufin Firewall Expert Tip #5: Nasty configuration mistakes: How to avoid them and how to recover

March 14, 2010

Ongoing changes to network and security device configuration are unavoidable and necessary for business. But they are also risky. They can have unexpected consequences – from service interruptions to performance degradation and even downtime.

How can you reduce the risk associated with configuration changes? Here is a 3-tier strategy:

1. Reduce the likelihood of configuration errors:

  • Monitor and review changes
  • Establish change procedures and processes
  • Establish a test plan for all changes

2. Detect problems as early as possible:

  • Monitor the environment
  • Listen to your users

3. Prepare for a fast recovery if something goes wrong:

  • Maintain accessible, actionable audit information
  • Establish standard recovery procedures

Finally, implementing solutions that can automate error-prone, repetitive tasks and can maintain vigilance 24 hours a day goes a long way to preventing, and recovering from, human configuration errors.

Monitor and review changes

Even if they look simple, all configuration changes should be monitored and reviewed. For example, suppose you’re adding a host to a network group in order to provide access, and you are unaware that the same group is used in a different place to block traffic. Another pair of eyes will often catch something you missed.
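A simple automated cross-check can catch this class of mistake. Here is a hedged sketch – the rule format is invented for illustration – that lists every rule referencing a given group before you touch it:

```python
# Hypothetical rule-base format, for illustration only: each rule lists
# the object groups used in its source and destination.
rules = [
    {"id": 1, "source": ["branch-hosts"], "destination": ["web-servers"], "action": "accept"},
    {"id": 2, "source": ["partner-net"], "destination": ["branch-hosts"], "action": "drop"},
]

def rules_referencing(rules, group):
    """Return the rules whose source or destination mentions the group."""
    return [r for r in rules if group in r["source"] or group in r["destination"]]

# Before adding a host to "branch-hosts", check where else the group appears.
hits = rules_referencing(rules, "branch-hosts")
for r in hits:
    print(f"rule {r['id']}: action={r['action']}")
```

Here the same group appears in both an accept rule and a drop rule – exactly the situation where a quick change has unexpected side effects.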

Establish change procedures and processes

Change requests must be communicated consistently so that the right people can review them and assess their impact. Many problems can be avoided simply with good communication. Some organizations schedule weekly change review meetings to understand and plan complex changes. But the most effective way to ensure that changes are reviewed and approved is by enforcing a change process workflow.

Establish a test plan for all changes

It may sound surprising, but many changes are tested only hours or days after implementation, while some are never tested at all. A test plan for every change is a critical part of the change process. Sometimes this isn’t as easy as it sounds and involves coordinating end users, business partners, and professional testers. The work you put in here will build credibility in your team’s ability to get things right.

Monitor the environment

The firewall environment should be continuously monitored and abnormal behavior should automatically trigger alerts.  The firewall environment might include the operating system, the network interfaces, the firewall software, the firewall hardware, and the firewall rule base.  These should be analyzed and correlated and, if necessary, escalated for a closer look.

Listen to your users

A helpdesk should be in place so that users can easily report problems. The helpdesk should be manned with trained personnel and have clear processes for handling incidents. Have a plan for correlating multiple incidents to a single problem. Each team should have tools to assist root cause analysis before escalation to the next level.

Maintain accessible, actionable audit information

Each and every change must be documented properly and recorded in an audit trail. A comprehensive audit trail should include the target device, the exact time of the change, the configuration details, the people who were involved (requestor, approvers, implementor), and the change context such as the project or application.

But a detailed audit trail is not enough on its own. The information must also be presented in an easy-to-read format so that you can easily access it when needed. Additionally, you’ll want to have filtering and querying capabilities on top of the data to speed up searches and lookups.
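As an illustration of what "actionable" means in practice, here is a toy sketch of an audit record with the fields listed above, plus a simple query over it. The field names are illustrative, not any product’s actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

# A minimal audit record carrying the fields discussed above:
# target device, time, configuration details, the people involved,
# and the change context.
@dataclass
class AuditRecord:
    device: str
    timestamp: datetime
    details: str
    requestor: str
    approver: str
    implementor: str
    context: str

def query(trail, device=None, since=None):
    """Filter the audit trail by target device and/or time window."""
    return [
        rec for rec in trail
        if (device is None or rec.device == device)
        and (since is None or rec.timestamp >= since)
    ]

trail = [
    AuditRecord("fw-edge-1", datetime(2010, 3, 1, 9, 0), "opened tcp/443",
                "alice", "bob", "carol", "web project"),
    AuditRecord("fw-dmz-2", datetime(2010, 3, 2, 14, 30), "removed rule 17",
                "dave", "bob", "carol", "cleanup"),
]

recent = query(trail, device="fw-edge-1")
print([r.details for r in recent])
```

Even this trivial filter shows why structured records beat raw text: the moment an incident starts, "what changed on fw-edge-1 since yesterday?" becomes a one-line query.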

Prepare for rapid recovery

Now comes the incident. Despite everything, something bad has happened and you need to respond. You will be judged by the time it took to recover, so you want to be well-prepared with tools, staff and processes to handle this. You want to keep stress down to a minimum.

If you have set up the procedures above, you are already in pretty good shape. Either you caught the problem during the change process or, if it was missed, you can discover it early, before users and services are affected. Thanks to the audit trail, you know exactly what changes have been made lately, by whom, and why. Experts agree that most recovery time is spent figuring out what changed; if you already know, recovery times will be much shorter. You run some quick queries to pinpoint likely suspects and can roll back the changes quickly.

If you use Tufin solutions, there are a number of tools that can help you control changes, detect problems, and recover from errors:

  • A complete audit trail with full accountability and integration with ticketing systems
  • Comprehensive change reports and side-by-side diffs for rule bases, objects and textual configurations
  • Real-time change notifications with filtering (by change type, device, affected networks)
  • Central console for viewing all recent changes across all devices regardless of vendor and model
  • A policy analysis tool for determining which firewalls and rules are blocking services across an environment
  • Rule and object change history reports
  • Business process automation to manage the change process (SecureChange Workflow) and integration with existing ticketing systems

Let us know how you recover from configuration mistakes.


Security: Moving Beyond Firewall Configuration Management

February 15, 2010

Platen describes the ultimate network management suite with a comment from me:

Tufin Firewall Expert Tip #4: Vendor and model-specific tips for optimizing firewall performance

February 9, 2010

Do your firewalls need a tune-up? In our last column, we looked at general ways that firewall performance can be improved to overcome problems such as high CPU utilization, low throughput, and slow applications.

This time, we are following up with a list of vendor and model-specific tips and best practices that can help you to optimize your firewall infrastructure. Thank you to everybody who responded to the last column and sent in their own tips!

Check Point

  1. Use networks instead of address ranges in NAT.
  2. Avoid rules with Ident.
  3. Replace nested groups by flat groups.
  4. Be aware of configurations that SecureXL templates (fastpath) cannot handle, for example, Security Servers or SYNDefender.
  5. Note that SecureXL templates can be disabled from a certain rule onwards due to certain configurations such as client auth, time objects, etc.
  6. Be aware of configurations that SecureXL cannot handle, for example:
    • FloodGate-1 (automatically disables SecureXL)
    • Rules with user authentication
    • Services with a port number range (disables connection-rate acceleration)
    • Time object associated with the rule (disables connection-rate acceleration)
  7. Be aware of SmartDefense configurations that may impact performance:
    • Network Security -> Fingerprint Scrambling -> ISN Spoofing
    • Network Security -> Fingerprint Scrambling -> TTL

Cisco all models

  1. Debug messages are known to affect performance.

PIX 6.3

  1. TCP Intercept is known to impact performance.
  2. If you are not using NAT and have no DNAT communications, disable the ILS fixup.

Cisco IOS Firewall

  1. Performance may be affected if the value of ‘ip inspect one-minute high’ is far greater than the value in the ‘show ip audit stat’ command.

Cisco ASA

  1. Verifying TCP checksums may impact firewall performance.
  2. Ideal performance is achieved when traffic enters and exits ports on the same adapter or ports on adapters serviced by the same I/O bridge (ASA 5580).

Cisco FWSM

  1. Deep packet inspection may cause high CPU (all inspection engines except for SMTP are handled in software).
  2. Before release 3.1, non-UDP/TCP/ICMP flows were handled on a packet-by-packet basis. With 3.1 and higher, the FWSM creates flows in NP1 and NP2.
  3. Be aware of features that are not offloaded to network processors, they will use the CPU.
  4. Built-in ACL optimization algorithm: FWSM Release 4.0 incorporates an algorithm capable of optimizing ACLs by coalescing contiguous subnets referred to in different access-control entries into a single statement and detecting overlaps in port ranges. Note that after the optimization process, the ACL is likely to be different from the original one.
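The coalescing idea behind the FWSM optimization is easy to illustrate with Python’s standard ipaddress module. This is a sketch of the concept only, not the FWSM implementation:

```python
import ipaddress

# Two contiguous /25s in separate access-control entries can be collapsed
# into a single /24, reducing the number of ACL statements the device stores.
entries = [
    ipaddress.ip_network("10.1.1.0/25"),
    ipaddress.ip_network("10.1.1.128/25"),
    ipaddress.ip_network("10.2.0.0/24"),
]

# collapse_addresses merges adjacent and overlapping networks.
collapsed = list(ipaddress.collapse_addresses(entries))
for net in collapsed:
    print(net)
```

The two /25s come out as one 10.1.1.0/24 entry – which is also why, as noted above, the optimized ACL is likely to look different from the original one.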

Juniper (ScreenOS)

  1. ALG (application layer gateway) is applied globally to all policies by default but may have a major impact on performance. Disabling it on specific policies can make a significant improvement.
  2. On the high-end firewall platforms NS-5000, ISG-1000 and ISG-2000, with ScreenOS 6.2 and above, Juniper switched the default rule search algorithm from “hardware” (ASIC) to “software” (CPU). The software algorithm searches policies faster than older versions when a zone pair has more than 500 rules, but it can cause high CPU during policy changes.
  3. ScreenOS 6.1: using wildcard address/wildcard policy causes a performance penalty.


Fortinet

  1. Enable only the management features you need. If you don’t need SSH or SNMP, don’t enable them.
  2. Enable only the required application inspections.
  3. Minimize use of alert systems. If you export syslog, you may not need SNMP or email alerts.
  4. Establish auto-updates (scheduled updates) at a reasonable rate. Every 4 or 5 hours should be fine in most cases.
  5. Minimize use of Protection Profiles. If you don’t need a Protection Profile on a firewall rule, don’t put it there.
  6. Minimize use of Virtual Domains and avoid them completely on low-end models.
  7. Avoid Traffic Shaping if you need maximum performance. By definition, Traffic Shaping slows down traffic.

As usual, I’d love to have your feedback,

Tufin Firewall Expert Tip #3: Best practices for optimizing firewall performance

January 11, 2010

Is your firewall overloaded? Symptoms include high CPU, low throughput and slow applications.  Before upgrading your hardware, it is worth checking whether the firewall configuration can be optimized.

Optimization techniques can be divided into two groups – general best practices, and vendor-specific, model-specific configurations. This column focuses on best practices. Next time, we will look at vendor-specific tips, so if you have any to share, we would like to hear from you.

Optimizing firewalls for better performance and throughput:

  1. Remove bad traffic and clean up the network.  Notify server administrators about servers hitting the firewall directly with outbound denied DNS/NTP/SMTP/HTTP(S) requests as well as dropped/rejected internal devices. The administrators should then reconfigure the servers not to send this type of unauthorized outbound traffic, thereby taking load off the firewall.
  2. Filtering unwanted traffic can be spread among firewalls and routers to balance the performance and effectiveness of the security policy.
    1. Identify the top inbound dropped requests that are candidates to move upstream to the router as ACL filters. This can be time consuming, but it is a good method for moving blocks upstream to the router and saving firewall CPU and memory.
    2. If you have an internal choke router inside your firewall, also consider moving common outbound traffic blocks to your choke routers, freeing more processing on your firewall.
  3. Remove unused rules and objects from the rule bases.
  4. Reduce rule base complexity – rule overlapping should be minimized.
  5. Create a rule to handle broadcast traffic (bootp, NBT, etc.) with no logging.
  6. Place the heavily used rules near the top of the rule base. Note that some firewalls (such as Cisco PIX and ASA version 7.0 and above, FWSM 4.0 and certain Juniper Networks models) don’t depend on rule order for performance since they use optimized algorithms to match packets.
  7. Avoid DNS objects requiring DNS lookup on all traffic.
  8. Your firewall interfaces should match your switch and/or router interfaces.  If your router is half duplex your firewall should be half duplex.  If your switch is 100 Mbit your firewall interface should be hard-set to match your switch; both should most likely be hard-set to 100 Mbit full duplex. Your switch and firewall should both report the same speed and duplex mode. If your switch is gigabit, your switch and firewall should both be set to auto-negotiate both speed and duplex.  If your gigabit interfaces do not match between your firewall and switch, you should try replacing the cables and patch panel ports. Gigabit interfaces that are not linking at 1000 Mbit full duplex are almost always a sign of other issues.
  9. Separate firewalls from VPNs to offload VPN traffic and processing.
  10. Offload UTM features from the firewall: AV, AntiSpam, IPS, URL scanning.
  11. Upgrade to the latest software version. As a rule of thumb, newer versions contain performance enhancements but also add new capabilities, so a performance gain is not guaranteed.
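Several of the steps above start with the same question: which unwanted traffic is hitting the firewall most? As an illustration of tip 2, here is a small sketch – the log format is invented – that tallies dropped inbound requests to find candidates for upstream router ACLs:

```python
from collections import Counter

# Hypothetical parsed drop-log entries: (source IP, destination port).
dropped = [
    ("203.0.113.7", 445), ("203.0.113.7", 445), ("198.51.100.9", 23),
    ("203.0.113.7", 445), ("198.51.100.9", 23), ("192.0.2.50", 135),
]

# The heaviest offenders are the best candidates to block upstream,
# saving firewall CPU and memory.
top = Counter(dropped).most_common(2)
for (src, port), count in top:
    print(f"{src} -> port {port}: {count} drops")
```

On a real firewall you would feed this from days or weeks of logs, but the principle is the same: move the top of the list to the router.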

If you use Tufin SecureTrack, you can automate a number of these tasks.

Here are a few ways that SecureTrack can help:

  1. Identify unused rules and objects with the Rule and Object Usage Report, and consider removing them. The longer the reporting period, the more reliable the rule usage status will be. Remember that certain rules, like the ones allowing disaster recovery services, are only used rarely. You can also identify and clean up unused group members.
  2. Analyze rule shadowing with Policy Analysis. Run Policy Analysis with “Any;Any;Any;Any” to identify completely shadowed rules. These rules are redundant and should be deleted. You can re-validate the redundancy with an unused rules report.
  3. Identify the most-used rules with the Rule and Object Usage Report and move them up in the rule base hierarchy. To find the top-most location for placing a rule without affecting connectivity, run an “Any;Any;Any;Any” policy analysis query, then, for each most-used rule:
    1. If it is not shadowed, move it to any higher location.
    2. If it is shadowed, find the lowest-ranked shadowing rule with a contradictory action and place the most-used rule below that one.
  4. Other things to keep in mind when re-ordering rules:
    1. You’ll probably want to preserve the rule base structure, for example, rule grouping by service or application, source or destination, projects etc.
    2. Be careful with policies containing rules with special actions such as authentication or encryption – shadowing becomes more tricky in this case.
  5. You may also use the Best Practices Rule Order Optimization test to quickly identify candidates for relocation.
  6. Use the Automatic Policy Generator (APG) to identify and remove unwanted traffic from the firewall. Read more about APG here.
  7. Use the “Software Version Compliance Report” to control your firewall software versions.
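The shadowing idea in step 2 can be illustrated with a toy model (the rule format is invented; real analysis must also account for services and actions): a rule is completely shadowed when an earlier rule already matches everything it matches.

```python
import ipaddress

# Toy rules: (source network, destination network, action). A rule is fully
# shadowed if some earlier rule's source and destination are supersets of its own.
rules = [
    (ipaddress.ip_network("10.0.0.0/8"), ipaddress.ip_network("0.0.0.0/0"), "accept"),
    (ipaddress.ip_network("10.1.0.0/16"), ipaddress.ip_network("192.0.2.0/24"), "accept"),
]

def shadowed(rules):
    """Return indices of rules fully covered by an earlier rule."""
    result = []
    for i, (src, dst, _) in enumerate(rules):
        for esrc, edst, _ in rules[:i]:
            if src.subnet_of(esrc) and dst.subnet_of(edst):
                result.append(i)
                break
    return result

print(shadowed(rules))  # the second rule is redundant
```

As noted in step 4, rules with special actions such as authentication or encryption make the real analysis trickier than this sketch suggests.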

Last but not least, remember that optimization can have a price, too – beyond the time you’ve invested. If you are not careful, you can wind up with a rule base which is too hard to maintain.  If you have the budget, there are times when upgrading the hardware is the easiest alternative.

In our next column, we’ll focus on ways to optimize specific models of firewalls from the different vendors. If you have any tips that you would like to share, please contact me at

Tufin Firewall Expert Tip #2: Analyzing Network Connectivity Problems

November 10, 2009

Network connectivity problems are some of the most common – and aggravating – for business users. With distributed systems, as soon as an application does not behave as expected, the firewall is suspect. There are many other possible points of failure – the client application, the user’s PC, intermediate switches, routers, filters, load balancers and the application itself. But, because of its nature (secretive and designed to keep people out) the firewall is a prime suspect. As a firewall administrator, you are guilty until proven innocent.

So how can you quickly determine if the problem is due to the firewall or not?

One approach is to analyze the firewall traffic logs. Contact the user, obtain his IP address and ask him to access the application again. Ideally, this should trigger the connection in question. Then you can review the firewall traffic logs and locate the dropped or accepted packets. How easy this is depends on the tools – unless you have a smart log browser, you may have to work with syslogs.  Normally there will be a lot of logs so a filter on the source IP and, if possible, on the destination IP or port will make things easier.
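Even with plain syslog, a few lines of scripting narrow the search considerably. A sketch with an invented log line format:

```python
# Hypothetical one-line-per-packet log format:
# "<time> <action> src=<ip> dst=<ip> dport=<port>"
logs = [
    "10:01:02 drop src=10.1.2.3 dst=192.0.2.10 dport=443",
    "10:01:03 accept src=10.9.9.9 dst=192.0.2.10 dport=80",
    "10:01:04 drop src=10.1.2.3 dst=192.0.2.10 dport=443",
]

def filter_logs(lines, src=None, dst=None):
    """Keep only lines mentioning the given source and/or destination IP."""
    return [
        line for line in lines
        if (src is None or f"src={src}" in line)
        and (dst is None or f"dst={dst}" in line)
    ]

# Filter on the user's IP and the application's IP, as described above.
for line in filter_logs(logs, src="10.1.2.3", dst="192.0.2.10"):
    print(line)
```

Two matching drop lines and the firewall is no longer a suspect – it is the culprit, and you know which traffic it is dropping.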

Unfortunately, in many cases, you will find nothing. One possible reason is that the rule that allows or blocks the relevant traffic is not configured to generate logs. Another possibility is that you are not looking at the right firewall or are simply missing the relevant logs.

Another method is to analyze the firewall rule base. In many cases this is not feasible due to the size and complexity of the network and firewall policies. But if the network is relatively small and you know the rule base very well, you may be able to narrow the problem down to a specific rule or to a recent change that might have affected the application flow.

If you have Tufin SecureTrack, you can use the Policy Analysis tool to query the rule base. Get the user’s IP address, the IP address of the application and the service or port, if possible. Log into SecureTrack and create a policy analysis query with these inputs. You can run the query on all firewalls or, if you are sure which ones are relevant, on a subset. For the report, you can choose to show all traffic or only dropped or accepted traffic.  SecureTrack does not send any packets over the network. It analyzes its own copy of the rule base, which is always up to date from continuous monitoring. Since SecureTrack does not depend on traffic logs, it doesn’t matter whether log data is missing or unavailable.

Policy Analysis will quickly determine whether the firewalls are allowing the user’s traffic or not. If it turns out that the firewall is, in fact, blocking traffic, Policy Analysis will point you to the rule that’s causing the problem as well as when it was last changed, and by whom.
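The core of such a query is first-match semantics. Here is a toy sketch – not SecureTrack’s engine – of matching a (source, destination, port) triple against an ordered rule base:

```python
import ipaddress

# Toy ordered rule base: first match wins, with an implicit final drop.
rules = [
    {"src": "10.0.0.0/8", "dst": "192.0.2.0/24", "port": 443, "action": "accept"},
    {"src": "0.0.0.0/0", "dst": "192.0.2.0/24", "port": 443, "action": "drop"},
]

def first_match(rules, src, dst, port):
    """Return the first rule matching the triple, or an implicit drop."""
    for rule in rules:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(rule["src"])
                and ipaddress.ip_address(dst) in ipaddress.ip_network(rule["dst"])
                and port == rule["port"]):
            return rule
    return {"action": "drop", "implicit": True}

print(first_match(rules, "10.1.2.3", "192.0.2.10", 443)["action"])
```

Because this works on a copy of the rule base rather than on traffic, it answers the question even when logs are missing – which is precisely the advantage described above.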

Tufin Firewall Expert Tip #1: Relocating a Server

September 22, 2009

An IT group needs to move a server and you need to update the firewall policy. The question is this: Which rules need to be changed? What is the fastest and safest way to get the job done?

You have two choices on where to start: the firewall rule base or the traffic logs. The rule base has one obvious advantage – it’s smaller. But it may be difficult to figure out which rules are relevant. Often the server will not be referenced explicitly, so you need to manually check every rule. This could be a tricky task, particularly if your rule base has been growing for a few years and has had multiple administrators. Rule shadowing complicates things even more.

The other alternative is reviewing firewall traffic logs. By identifying the traffic to and from the server over a reasonable period of time, you can create a new rule base. You do not need to worry about shadowing and potentially, this method will enable you to build a very accurate policy that does not include a lot of overly permissive rules. The downside is the large number of logs that must be analyzed. How far do you need to go back to make sure you’ve covered all legitimate use? One month? Three months? A year? What about your disaster recovery hot site traffic? Remember, you won’t have any logs for that unless you have tested your failover.
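The log-driven approach boils down to aggregating observed traffic into candidate rules. A simplified sketch with invented flow data (the real analysis, as described below, involves much more):

```python
from collections import Counter

# Hypothetical observed flows involving the server being moved:
# (source IP, destination IP, destination port).
flows = [
    ("10.1.1.5", "10.2.0.9", 1433), ("10.1.1.6", "10.2.0.9", 1433),
    ("10.1.1.5", "10.2.0.9", 1433), ("10.3.3.3", "10.2.0.9", 80),
]

# Collapse repeated flows into distinct candidate allow rules, keeping a
# hit count so rarely seen access can be reviewed before it is permitted.
candidates = Counter(flows)
for (src, dst, port), hits in sorted(candidates.items()):
    print(f"allow {src} -> {dst}:{port}  ({hits} hits)")
```

The hit counts matter: a flow seen once in a year might be legitimate disaster recovery traffic – or a port scan that should never become a rule.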

Whichever way you go, here are some things to look out for:

  • Creating over-permissive rules. Though they provide a quick fix, hackers may use them as attack vectors later on.
  • Access that is currently allowed and not really required.
  • Implicit connectivity that seems superfluous but could actually be essential for business continuity.

If you have Tufin SecureTrack™, try using the Automatic Policy Generator (APG) to prepare for server relocation. Run APG on firewall logs from or to the server going back a year. It will generate a set of rules that allows the actual server access that took place during that period.

You should review it to make sure there are no rules that stem from malicious traffic, such as a port scan (even one that runs slowly over several days), a Conficker infection or a generic botnet. Change the server’s address to the new one and implement the new rules on the relevant firewalls above any potentially blocking rules – you can consult SecureTrack’s Policy Analysis tool or the Policy Change Advisor in SecureChange™ Workflow. After the relocation is done, use the Rule and Object Usage report to clean up the obsolete access rules.

Automatic Policy Generation with Permissive Rule Analysis

June 29, 2009

One of my favourite activities, as CTO and founder, is meeting our users and talking to them about their needs and wishes in the areas of firewall policy management and beyond. It’s always nice to hear how SecureTrack is helping out and what our users like about it; it’s also useful to hear about things that don’t work and need improvement. But what I’m really after is requirements that lead to innovation.

A couple of years ago I was visiting one of our users, a mobile operator, and we were discussing a requirement they had to remove unused objects from the policy. At the time SecureTrack already provided rule usage analysis and this new requirement gave me the idea of mapping traffic logs onto objects within rules. This solution seemed right; unused objects would appear with zero hits and could then be safely deleted from the rule. The user also mentioned something about analyzing rules that are too wide which I kept in mind but didn’t have the bandwidth to deal with.

Anyway, we started brainstorming the object usage analysis requirement and came upon an interesting scenario:

  Source      Destination      Service      Action
                               HTTP         Accept

Assuming all objects are used, can this rule be improved?

If you like puzzles, stop here and think. Get on with it…