


Post subject: A brief analysis of the NSFOCUS IceEye (冰之眼) Network Intrusion Prevention System
Posted: 2007-01-30 16:55
Model: NIPS-200
Engine version: 5.0.0.15
Rule base version: 5.0.0.40


Pros:
1. Supports both B/S (web) and C/S (client) management. The interface is very clear and friendly; you can figure it out without even opening the manual, and there is a configuration wizard. The serial-console interface is available in both Chinese and English and is equally straightforward.
2. Flexible working modes: the whole box can act as an IDS, or entirely as an IPS; some ports can serve as IDS while the rest serve as IPS; it can also be set to TAP or BYPASS mode. (Sounds great, but in practice it is mostly a gimmick.)
3. Built-in firewall, but it could hardly be more basic; it is even weaker than the firewall in a home broadband router. Better than nothing.

Cons:
1. Every time a rule is added, modified or deleted, the engine must be restarted for the change to take effect, and the restart takes a bit over one minute! During that time the device is in bypass (pass-through) mode with no protection at all, which is a very serious problem. You also have to log in again after every engine restart.
2. The engine must also be restarted after the event library is upgraded; in fact almost any settings change requires an engine restart!
3. After an engine update the whole system must be rebooted, and switching the working mode also requires a system reboot. A system reboot likewise takes a bit over one minute, during which the network is down; there is no sign of the hardware bypass claimed in the marketing material (perhaps not all models have it, but even so the corresponding menu items should be hidden in the UI).
4. Neither firewall rules nor IPS rules distinguish inbound from outbound direction, and the time definitions are incomplete: only recurring time windows can be defined, not absolute time ranges.
5. IPS events have different requirements from IDS events: IPS signatures must be precise, otherwise false positives will cut off users' legitimate traffic! But NSFOCUS's IPS events look like the old IDS events simply carried over, which you can see from the default policy, where more than 97% of events are not blocked (not blocked because the vendor is not confident enough to block).
a) 23 worm-attack events (1 blocked by default)
b) 88 P2P/IM/online-game/video events (34 blocked by default), many of which are just filler, for example:
IM software Yahoo Messenger resolves server address
IM software Yahoo Pager resolves upgrade-site address
IM software ICQ user status changed to Away
IM software ICQ user goes offline
IM software ICQ user sends a message
IM software ICQ user receives a message
IM software Yahoo Messenger user status changed to Away
IM software Yahoo Messenger user goes offline
IM software Yahoo Messenger user sends a message
IM software Yahoo Messenger user receives a message
IM software MSN contact status changed to Away
IM software MSN user status changed to Away
IM software MSN file transfer failed
IM software MSN file transfer succeeded
IM software MSN user goes offline
c) 75 network-virus events (none blocked by default), the vast majority of them virus e-mails
d) 113 connection events (none blocked by default)
e) 1,380 attack events (12 blocked by default)
6. The web UI has no online-update function (the GUI client seems to have one), so how is the attack event library supposed to stay current? And when you download update files from the update site by hand, the connection keeps getting reset by the server after a while, so you have to resume the download several times before it completes.
7. Performance is poor (a measurement sketch follows this list):
a) Pinging the gateway without the IPS inline shows a latency of about 1 ms; with the IPS inline it rises to 12-20 ms.
b) Downloading files (FTP and HTTP) without the IPS inline runs at roughly 4M (3,798.47K); with the IPS inline it drops to roughly 400K (373.65K). That is a performance drop of more than 90%! Even with every rule library disabled, FTP and HTTP download performance still falls to about 10% of the original.
8. It claims it can act as IDS and IPS at the same time, but the UI does not separate the two, and its IPS performance is already pitifully low; how is it supposed to handle the even larger traffic of IDS duty on top of that?
9. The web UI's logging and reporting features are extremely weak; the GUI client was not tested.
10. Management is out-of-band only, which is both a strength and a weakness.
11. System parameters and rule files are exposed.
12. There are only two options, enable and block, only one policy set, and no concept of virtual systems.
13. The UI appears to offer a user-defined event feature, but it does not actually exist; only a simple custom-rule function is provided under the <Quick Protection> page.
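For anyone who wants to reproduce the measurements in item 7, the rough sketch below automates the with/without-IPS comparison (average gateway RTT plus a single-file download). It is only an illustration, not anything shipped with the product; the gateway address, test URL and sample count are placeholders to replace with your own, and Linux-style ping output is assumed.

[code]
# Illustrative sketch only: compare gateway latency and download speed
# with the IPS inline versus removed/bypassed.
# GATEWAY, TEST_URL and the ping count are assumptions, not product defaults.
import re
import subprocess
import time
import urllib.request

GATEWAY = "192.168.1.1"                        # assumed gateway address
TEST_URL = "http://192.168.1.10/testfile.bin"  # assumed local test file

def avg_ping_ms(host, count=20):
    """Average RTT in ms parsed from the system ping (Linux-style summary line)."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    m = re.search(r"= [\d.]+/([\d.]+)/", out)   # rtt min/avg/max/mdev = a/b/c/d
    return float(m.group(1)) if m else None

def download_kbps(url):
    """Download the file once and return throughput in KB/s."""
    start = time.time()
    data = urllib.request.urlopen(url, timeout=60).read()
    return len(data) / 1024.0 / (time.time() - start)

if __name__ == "__main__":
    for label in ("WITHOUT the IPS inline", "WITH the IPS inline"):
        input(f"Cable the network {label}, then press Enter...")
        print(label, "- gateway RTT:", avg_ping_ms(GATEWAY), "ms,",
              "download:", round(download_kbps(TEST_URL), 2), "KB/s")
[/code]

Running it once per cabling gives directly comparable numbers for the latency and throughput differences described in item 7.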

Summary:
NSFOCUS's IPS actually looks more like a UTM, but it falls far short of a real UTM; if it had to compete with UTMs it would be beaten very badly. Positioned as an IPS, however, it is a different story: there is currently almost no domestic IPS product that can compete with it, so this is probably a fairly smart market strategy.




Posted: 2007-01-31 14:03
More posts like this would be welcome. Beyond theoretical discussion, it lets us see what shortcomings actually show up in real deployments and perhaps how they can be remedied..... In short, it gives everyone a reference and some inspiration.


Posted: 2007-02-03 22:46
I often hear customers asking about the performance of NSFOCUS's IPS. My own feeling was that NSFOCUS's marketing of the IPS outpaces its actual capability, but since I never had a chance to use it I could not say for sure. Today I finally get to see hands-on results, which answers a question of mine. Thanks.


Posted: 2007-02-05 10:07
It looks like the version tested was the old V5.0; the current version is V5.5. Hoping the OP will post an updated hands-on report!

http://www.nsfocus.com/news/200611/143.html

NSFOCUS holds press conference announcing upgrades to both "IceEye" products

On November 10, NSFOCUS held a press conference at its product showcase center for the "IceEye Network Intrusion Detection/Prevention System V5.5 upgrade", officially announcing upgrades to both of its "IceEye" products.


Posted: 2007-02-13 11:56
Some of the OP's analysis is not accurate, which may come down to configuration mistakes or the angle from which the problems were looked at. :)
For example, it is normal for the engine processes to restart when a policy change is applied or the rule base is upgraded; most gateway products at home and abroad cannot avoid this. What matters is the restart time, and as far as I know the IceEye IPS restarts in about one second, not one minute.
NSFOCUS's IPS is now at V5.5; V5.0 was retired back in October last year.
To be fair, as far as I know, since NSFOCUS launched its IPS in September 2005 it has remained the only domestic IPS with independent intellectual property; most other products are foreign or OEMed from foreign vendors (this may not be entirely accurate). It broke the monopoly of foreign IPS products in the domestic market. Work like this should earn more praise and encouragement rather than sarcasm and mockery. :)




Posted: 2007-02-23 19:12
I do not quite agree with the poster above; I do not see any sarcasm in the OP's post.
Even if it has now been upgraded to version 5.5, that does not mean the problems the OP described have been solved.
I wonder how the poster above explains items 4 through 13 in the OP's list of cons.
As for "it is still the only domestic IPS with independent intellectual property ..... it broke the monopoly of foreign IPS products in China": that is high praise, and full of expectation too. Of course I also hope it can become more professional: being the leader at home is nothing special; being the leader internationally is what professionalism means. Forgive my bluntness. :)




Posted: 2007-02-25 21:58
Having read the discussion above, I cannot help adding a few words:


"2. Flexible working modes: the whole box can act as an IDS, or entirely as an IPS; some ports can serve as IDS while the rest serve as IPS; it can also be set to TAP or BYPASS mode. (Sounds great, but in practice it is mostly a gimmick.)"

Because an IPS sits inline in the network, when you suspect during troubleshooting that the IPS is what is breaking the network or making it misbehave, you can simply switch it to TAP mode and diagnose the problem without re-cabling. It is actually a very useful feature.



"NSFOCUS's IPS actually looks more like a UTM, but it falls far short of a real UTM; if it had to compete with UTMs it would be beaten very badly. Positioned as an IPS, however, it is a different story: there is currently almost no domestic IPS product that can compete with it, so this is probably a fairly smart market strategy."

I do not see it that way. If you study the comparable products and the IPS market positions of McAfee and ISS, and then analyze NSFOCUS's product, you will not reach that conclusion. Take the firewall function as an example: McAfee's IPS has one too, and it can also recognize and handle worms and network viruses.

NSFOCUS's IPS probably drew on the vendors above, but it also has features that some of those vendors lack or have not finished, such as bandwidth management based on time, protocol and address; presumably development also took the domestic network and application environment more into account.


In my view, the IPS products on the market today fall along these lines:

1. Dedicated IPS from companies with real research strength in vulnerabilities and offensive/defensive security, such as ISS, McAfee, 3Com's TippingPoint (promoted in China by H3C), and the IPS developed domestically by NSFOCUS.
These can be called today's mainstream IPS vendors.

2. IPS pushed by IT giants strong in firewalls or networking, such as NetScreen's IDP series and Cisco's IPS.

3. UTM products such as SonicWall and FortiGate; domestically, Venustech (启明) and An Shi (安氏) are also pushing related products.


I will not go into the technical differences and market positioning of dedicated IPS versus UTM; opinions differ.

My feeling is that competition in the domestic IPS market today is mainly among the few mainstream IPS vendors, because at this stage IPS is being adopted mostly by professional users interested in new technologies and products, and they are more drawn to dedicated IPS from mainstream security vendors.




Posted: 2007-02-26 23:28
Having read the discussion above, I cannot help adding a few words:
Comparing NSFOCUS with the three big IPS vendors (McAfee, TippingPoint, ISS) is pointless; they are simply not on the same level.

Look at how the big three can release signatures on the same day as Microsoft's monthly patch day; that is what real "virtual patching" means. Look at how many of their signatures target 0-days that have no patch. Then look at NSFOCUS's signature library; it is laughable. The January update in particular included three signatures for the "Panda Burning Incense" worm (熊猫烧香); pure grandstanding. Given how that worm actually spreads, the exploit that really uses the MS06-014 vulnerability is one NSFOCUS does not even recognize!

Maximum throughput, port density, concurrent connections, half-open SYN connections, and so on:
NSFOCUS lags on all of these, mainly because NSFOCUS cannot muster the hardware muscle.

The QoS feature mentioned above has long existed in TippingPoint, and McAfee added it in 4.1. As for McAfee's firewall function, its white paper explicitly calls it an "internal firewall"; bluntly put, it is just an ACL, only with a UI and logging that are far easier to set up than on a layer-3 switch or router. It is not a stateful-inspection firewall.

Also, word is that a few days ago NSFOCUS was again about to ......

In the IPS market, Sourcefire is also quite good internationally.

In addition, integration and interworking with vulnerability-assessment products is the trend.




Posted: 2007-03-16 11:02
flankxiao wrote:
Some of the OP's analysis is not accurate, which may come down to configuration mistakes or the angle from which the problems were looked at. :)
For example, it is normal for the engine processes to restart when a policy change is applied or the rule base is upgraded; most gateway products at home and abroad cannot avoid this. What matters is the restart time, and as far as I know the IceEye IPS restarts in about one second, not one minute.
NSFOCUS's IPS is now at V5.5; V5.0 was retired back in October last year.
To be fair, as far as I know, since NSFOCUS launched its IPS in September 2005 it has remained the only domestic IPS with independent intellectual property; most other products are foreign or OEMed from foreign vendors (this may not be entirely accurate). It broke the monopoly of foreign IPS products in the domestic market. Work like this should earn more praise and encouragement rather than sarcasm and mockery. :)


Perhaps, as you say, the IPS engine itself restarts in one second, but what I measured was the bypass time of the whole device, and the two are not the same thing. I tested it repeatedly: during an engine restart the device really does sit in pass-through mode for over a minute! I suggest looking into the bypass mechanism itself.
Also, I have heard that version 5.5 is much improved; I hope to get a chance to try it.
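A note for anyone who wants to measure the bypass window itself rather than the engine restart time: while the rule change is being applied, repeatedly attempt a connection that the IPS policy normally blocks and log when it starts and stops getting through. The sketch below only illustrates that idea; the target host, port and probe interval are assumptions, not anything tied to the product.

[code]
# Minimal sketch: during an engine restart / rule reload, repeatedly try a TCP
# connection that the IPS policy normally blocks and log every state change.
# The window in which the connection succeeds is the unprotected (bypass) window.
# TARGET, PORT and INTERVAL are assumptions chosen for illustration only.
import socket
import time

TARGET = "10.0.0.50"   # assumed host behind the IPS
PORT = 6881            # assumed port blocked by the IPS policy (e.g. a P2P port)
INTERVAL = 0.2         # seconds between probes

def blocked_conn_succeeds():
    try:
        with socket.create_connection((TARGET, PORT), timeout=1):
            return True
    except OSError:
        return False

last_state = None
while True:
    state = blocked_conn_succeeds()
    if state != last_state:
        stamp = time.strftime("%H:%M:%S")
        print(f"{stamp}  normally-blocked connection "
              f"{'PASSES (device unprotected)' if state else 'blocked again'}")
        last_state = state
    time.sleep(INTERVAL)
[/code]

The gap between the first "PASSES" timestamp and the following "blocked again" timestamp is the bypass window, independent of how quickly the engine process itself restarts.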




Posted: 2007-03-20 22:44
I learned a lot from this thread. It looks like IceEye still has quite a few problems at this point, so we will have to be careful when recommending it to customers.


Post subject: Re: A brief analysis of the NSFOCUS IceEye Network Intrusion Prevention System
Posted: 2007-04-12 11:39
huwy2003 wrote:
Model: NIPS-200
Engine version: 5.0.0.15
Rule base version: 5.0.0.40
……
NSFOCUS's IPS actually looks more like a UTM, but it falls far short of a real UTM; if it had to compete with UTMs it would be beaten very badly. Positioned as an IPS, however, it is a different story: there is currently almost no domestic IPS product that can compete with it, so this is probably a fairly smart market strategy.


My organization is evaluating IPS products at the moment; so far we have tested TippingPoint, TopSec (天融信), and NSFOCUS.
Here is how the NSFOCUS unit we are testing now compares with what the OP reported:

Firmware version: 5.5.0.24
Web management system version: 5.5.0.13
Engine version: 5.5.0.43
Engine last updated: 2007-02-05
Rule base version: 5.0.0.47
Rule base last updated: 2007-04-02

Pros: my impressions of points 1-3 match the OP's, so I will not repeat them.
Cons:
1. Having to restart after changing rules is indeed a hassle; TippingPoint does not have this problem. The restart takes about 3-5 seconds, though, not the one minute the OP reported.
2. Rules still have no inside/outside direction. The vendor's engineers explain that since a threat is a threat, inside and outside should be treated the same, which feels like a stretch. From my practical point of view the limited time definitions are not a big problem.
3. A large share of the events appear to be carried over from the IDS side. I do not know how events would be displayed without IDS-style capture; I will try it when I get a chance. False positives still exist but have not had much impact. Although it claims to monitor internal behavior, only inside-to-outside events are shown.
4. Some rules have no practical value, such as the MSN and QQ sign-on/sign-off events that constantly appear. Amusingly, it can show which machine is logged in with which QQ number, which comes in handy when there is a pretty girl in the office...
5. On performance my experience differs greatly from the OP's. Before the IPS went in, our network was always unstable because of heavy downloading on the LAN; pings to the gateway and DNS sat around 100 ms. With the IPS installed, pings to the gateway hold steady at 1 ms and DNS at 2-8 ms. It blocks P2P downloads well, but it does not seem able to rate-limit traffic.
6. No online update, which does look like a problem.

Overall, IceEye performs reasonably well as an IPS; compared with TippingPoint each has its strengths. The TopSec box crashed several times during testing and did not impress. It only arrived yesterday, so I am not yet familiar with its features; post questions in this thread and I will try to test them for everyone, heh.




Posted: 2007-04-12 15:55
Having read the replies above, let me share my view:

First, putting TippingPoint into the same comparison as NSFOCUS and TopSec is already questionable.
Internationally, TippingPoint (3Com), McAfee, and ISS (IBM) form the first tier, and the latest Gartner Leaders quadrant has added Juniper and Sourcefire. Juniper got into the first tier thanks to Symantec's backing, but its focus is still on firewalls and the IPS (IDP) product line has not changed noticeably. Sourcefire's position owes more to the support of the Snort community.

However you compare NSFOCUS with these companies, it is far behind.
As far as I know, TopSec has no IPS; the box being tested is very likely just a UTM brought over.

I do not know what traffic class of IPS you are evaluating. At the 100 Mbit level, domestic products can still compete with foreign ones and have a big price advantage; for gigabit-class projects, I would be cautious.

Policies that cannot distinguish inside from outside are a serious problem! There are times when you need to block traffic in only one direction. Some foreign products offer a virtual IPS that lets you refine policy by IP address (or range) and VLAN ID, which helps reduce false positives and lets different hosts get different policies.

For an IDS, false positives are still tolerable; for an IPS, false positives must be avoided as far as possible!!




Posted: 2007-04-12 16:46
NSFOCUS's IPS is essentially an IDS upgraded into an IPS, so some problems are only to be expected.

In fact IDS and IPS largely occupy different markets, even though IPS is still not very mature.
Small enterprises, for instance, rarely need an IDS but are far more likely to need an IPS, because beyond detecting problems they also need to block them. Banks need both IPS and IDS: besides blocking, they also need detection. Others, such as telecom carriers, need detection on their backbone networks but not blocking. So I think IDS and IPS can coexist.




Posted: 2007-04-13 09:17
3eyes wrote:
Having read the replies above, let me share my view:

First, putting TippingPoint into the same comparison as NSFOCUS and TopSec is already questionable.
Internationally, TippingPoint (3Com), McAfee, and ISS (IBM) form the first tier, and the latest Gartner Leaders quadrant has added Juniper and Sourcefire. Juniper got into the first tier thanks to Symantec's backing, but its focus is still on firewalls and the IPS (IDP) product line has not changed noticeably. Sourcefire's position owes more to the support of the Snort community.

However you compare NSFOCUS with these companies, it is far behind.
As far as I know, TopSec has no IPS; the box being tested is very likely just a UTM brought over.

I do not know what traffic class of IPS you are evaluating. At the 100 Mbit level, domestic products can still compete with foreign ones and have a big price advantage; for gigabit-class projects, I would be cautious.

Policies that cannot distinguish inside from outside are a serious problem! There are times when you need to block traffic in only one direction. Some foreign products offer a virtual IPS that lets you refine policy by IP address (or range) and VLAN ID, which helps reduce false positives and lets different hosts get different policies.

For an IDS, false positives are still tolerable; for an IPS, false positives must be avoided as far as possible!!


Let me say up front that what follows is purely a technical discussion, nothing personal.

Everything changes, and foreign products are not as almighty as you make them out to be. The gap for domestic products today is mainly in hardware; on the software side, including signatures, we are at least level with foreign vendors, if not ahead. Read Network World's September 2006 test report carefully and you will see that no IPS vendor today dares claim it is that much better than the others. You seem simply to assert that foreign products are better, but better in exactly which respects? Can you give concrete metrics? We admit there is a gap with foreign products, but where precisely is it? Running down domestic products across the board makes me wonder about your motives; domestic products have advantages of their own. I will not say more; go read the foreign test report, and look carefully at the charts in particular. The Chinese version is at http://newtest.cnw.cn/test/testdoc/yejie/htm2006/20061010_17620.htm
The English version seems to require registration, so I have copied it over below.
As for TopSec having no IPS, your information is badly out of date; they have been at it for the better part of a year, which is why I am starting to doubt you actually work in security. The public announcement has been out for a week: http://www.topsec.com.cn/news/show.asp?NewsID=565


Last edited by grantming on 2007-04-13 09:39; edited 5 times in total

Post subject: IPS performance tests show products must slow down for safety
Posted: 2007-04-13 09:19
The IPS test that David Newman and I did has just been published. It's
a (if you don't mind me saying so) amazingly good performance test, and
we also have some usability comments as well as completeness and
correctness. The story package itself is quite large, but the starting
point is at:

http://www.networkworld.com/reviews/200 ... -test.html

There's the big performance test with great graphs & tables, and:
- a video of the testing
- usability testing report on IPS consoles
- a discussion of how IPS devices fell down with part of our testing
(SNMP is just a bit too exotic of a protocol, evidently, and Cisco is
just too exotic and unusual of a vendor)
- where we saw problems in the coverage of the IPSes
plus little "mini-reviews" of the 6 products participating.

You have to register to read on; my apologies, but if you want to just
pretend to be me (there is no password) then feel free.

When a review like this comes out, the first 20 or 30 feedbacks we
always get are "why didn't you include vendor ?" The answer in this
case for any vendor of significance (Sourcefire, Juniper, Cisco,
ISS, the usual gang of tier 1 players) is "they didn't want to come
play." You can read whatever you want into that, but you'll see our
speculation on the issue in the discussion of coverage problems we saw.

another report:
http://www.networkworld.com/reviews/200 ... tml?page=1



IPS performance tests show products must slow down for safety
Results indicate high performance doesn't always mean high security.

Clear Choice Tests By David Newman and Network World Lab Alliance, Network World, 09/11/06

High-end intrusion-prevention systems (IPS) move traffic at multigigabit rates and keep exploits out of the enterprise. The problem is they might not do both at the same time.

In lab tests of top-of-the-line IPS systems from six vendors - Ambiron TrustWave (formerly Lucid Security), Demarc Threat Protection Solutions, Fortinet, NFR Security; TippingPoint, a 3Com company; and Top Layer Networks - we encountered numerous trade-offs between performance and security.

Several devices we tested offered line-rate throughput and impressively low latency, but also leaked exploit traffic at these high rates. With other devices, we saw rates drop to zero as IPS systems struggled to fend off attacks.

In our initial round of testing, all IPS systems missed at least one variant of an exploit we expected they'd easily catch - one that causes vulnerable Cisco routers and switches to reboot. While most vendors plugged the hole by our second or third rounds of testing (and 3Com's TippingPoint 5000E spotted all but the most obscure version the first time out), we were surprised that so many vendors missed this simple, well-publicized and potentially devastating attack (see Can anyone stop this exploit?).

These issues make it difficult to pick a winner this time around (see link to NetResults graphic, below). If high performance is the most important criterion in choosing an IPS, the TippingPoint 5000E and Top Layer Networks' IPS 5500 are the clear leaders. They were the fastest boxes on the test bed, posting throughput and latency results more commonly seen in Ethernet switches than in IPS systems.


Of course, performance isn't the only criterion for these products. The 5000E leaked a small amount of exploit traffic, not only in initial tests but also in two subsequent retests. TippingPoint issued a patch for this behavior two weeks ago. The 5000E also disabled logging in some tests. That's not necessarily a bad thing (indeed, TippingPoint says customers prefer a no-logging option to a complete shutdown), but other devices in the same test kept logging at slower rates.

The IPS 5500 scored well in tests involving TCP traffic, but it too leaked small amounts of exploit traffic. Top Layer attributed this to its having misconfigured the firewall policy for this test.

IPS systems from Demarc and NFR Security use sensor hardware from the same third-party supplier, Bivio Networks. The relatively modest performance results from both IPS systems in some tests might be caused by configuration settings on the sensor hardware, something both vendors discovered only after we'd wrapped up testing. On the plus side, both IPS systems stopped all attacks in our final round of testing.

Ambiron TrustWave and Demarc built their ipAngel-2500 and Sentarus IPS software around the open source Snort engine. The performance differences between them can be attributed to software and driver decisions made by the respective vendors.

Fortinet's FortiGate-3600 posted decent results in baseline tests involving benign traffic only, but forwarding rates fell and response times rose as we ratcheted up attack rates.

We should note that this is a test of IPS performance, not security. We didn't measure how many different exploits an IPS can repel, or how well. And we're not implying that just because an IPS is fast, it's secure.


Even so, security issues kept cropping up. As noted, no device passed initial testing without missing at least one exploit, disabling logging and/or going into a "fail open" mode where all traffic (good and bad) gets forwarded.

This has serious implications for IPS systems on production networks. Retesting isn't possible in the real world; attackers don't make appointments. Also, we used a laughably small number of exploits - just three in all - and offered them at rates never exceeding 16% of each system's maximum packet-per-second capacity. That we saw security issues at all came as a surprise.

The three exploits are all well known: SQL Slammer, the Witty worm and a Cisco malformed SNMP vulnerability. We chose these three because they're all widely publicized, they've been around awhile, and they're based on User Datagram Protocol (UDP), allowing us detailed control over attack rates using the Spirent ThreatEx vulnerability assessment tool.

The IPS sensors we tested sit in line between other network devices, bridging and monitoring traffic between two or more Gigabit Ethernet ports. Given their inline placement, the ability to monitor traffic at high rates - even as fast as line rate - is critical. Accordingly, we designed our tests to determine throughput, latency and HTTP response time. We used TCP and UDP test traffic, and found significant differences in the ways IPS systems handle the two protocols (see How we tested IPS systems).

Vendors submitted IPS systems with varying port densities. FortiGate-3600 has a single pair of Gigabit Ethernet interfaces, while IPS 5500 has two pairs. The IPS systems from Ambiron TrustWave, Demarc, NFR and TippingPoint offer four port-pairs. To ensure apples-to-apples comparisons across all the products, we tested three times, using one, two and four pairs of ports where we could.

One port-pair

Our tests of single port-pairs are the only ones where all vendors were able to participate.

In baseline TCP performance tests (benign traffic only, no attacks), the Demarc, TippingPoint and Top Layer devices moved traffic at 959Mbps, near the maximum possible rate of around 965Mbps (see link to The IPS torture test, scenario 1, below). With 1,500 users simultaneously contending for bandwidth and TCP's built-in rate control ensuring fairness among users, this is about as close to line rate as it gets with TCP traffic.

The IPS torture test: scenario 1
Vendors submitted IPSs with varying port densities. To ensure apples-to-apples comparisons across all products, we tested three times, using one, two, and four pairs of ports where we could. If no results are listed for a vendor in a particular test scenario, that is because the vendor did not supply that configuration. Because TCP comprises 95% of the Internet's backbone traffic, we emphasized the effects of attacks on TCP traffic in our tests. However, we also conducted tests with pure User Datagram Protocol (UDP) traffic, because that protocol is used by VoIP, streaming media, instant messaging, and peer-to-peer applications. Footnotes in red indicate there was a security issue associated with that result. Footnotes in blue indicate there was a logging issue associated with that result.
Scenario No. 1: testing with one port pair across all vendors
Throughput (Mbps) Perfect device Ambiron TrustWave Demarc Fortinet NFR TippingPoint Top Layer
TCP baseline 965 672 959 937 382 959 959
TCP plus 1% attack 965 929 924 928 358 959 959 [1]
TCP plus 4% attack 965 929 799 821 308 959 [2] 954 [3]
TCP plus 16% attack 965 868 216 453 158 317 [4] 911 [5]
UDP baseline, 64-byte frames 1,524 41 144 127 1,223 1,235 624
UDP baseline, 512-byte frames 1,925 301 1,925 1,005 1,925 1,925 1,925
UDP baseline, 1518-byte frames 1,974 628 1,960 1,974 1,974 1,974 1,974
Latency (millisec) Perfect device Ambiron TrustWave Demarc Fortinet NFR TippingPoint Top Layer
TCP baseline N/A 372.11 430.50 326.43 144.05 399.50 447.02
TCP plus 1% attack traffic N/A 262.50 397.68 326.68 158.30 398.05 418.25 [1]
TCP plus 4% attack traffic N/A 252.82 409.05 1,272.95 192.52 393.16 [2] 368.25 [3]
TCP plus 16% attack traffic N/A 325.70 15,607.59 2,865.32 11,522.86 8,170.68 [4] 375.61 [5]
UDP baseline N/A 0.14 1.50 0.43 0.08 0.07 1.46
UDP plus 1% attack traffic N/A 0.12 259.12 17.36 7.59 1.40 5.34 [6]
UDP plus 4% attack traffic N/A 0.12 404.65 4.31 6.85 11.53 [7] 8.43 [8]
UDP plus 16% attack traffic N/A 0.15 648.71 12.96 6.45 13.54 [9] 5.55 [10]
Footnotes: [1] Forwarded 86 Witty exploits; [2] Forwarded 1 Cisco malformed SNMP exploit; [3] Forwarded 362 Witty exploits; [4] Forwarded 1 Cisco exploit, disabled logging for 10 minutes; [5] Forwarded 370 Witty exploits; [6] Forwarded 280 Witty exploits; [7] Disabled logging for 10 minutes; [8] Forwarded 322 Witty exploits, incorrectly labeled some exploits as SYN floods despite pure UDP load; [9] Disabled logging for 10 minutes; [10] Forwarded 159 Witty exploits, incorrectly labeled some exploits as SYN floods despite pure UDP load.

It was a very different story when we offered exploit traffic, with most systems slowing down sharply. The lone exception is ipAngel, which moved traffic at rates under heavy attack that were equal to or better than its rates in the baseline test. All others slowed substantially under heavy attack - and worse, some forwarded exploit traffic.

The IPS 5500 leaked a small amount of Witty worm traffic at all three attack rates we used - 1%, 4% and 16% of its TCP packet-per-second rate. The vendor blamed a misconfiguration of its firewall policy (vendors configured device security for this project). With its default firewall policy enabled, Top Layer says its device would have blocked exploits targeting any port not covered by the vendor's Witty signature.

The TippingPoint 5000E leaked a small amount of malformed Cisco SNMP traffic when it was offered at 4% and 16% of the device's maximum forwarding rate, even after we applied a second and third signature update.

Further, with attacks at the 16% rate, the TippingPoint device disabled all alerts (it continued to block exploits but didn't log anything) for 10 minutes. TippingPoint calls this a load-mitigation feature and says customers overwhelmingly prefer this setting to having the device shut down if it becomes overloaded.


We understand that device behavior during overload is ultimately a policy decision. For enterprises where high availability trumps security, the ability to continue forwarding packets is essential - even if it means a temporary shutdown of IPS monitoring. More-paranoid sites might block all traffic in response to an overload. In this test, the TippingPoint and NFR devices (and possibly others) explicitly give users a choice of behaviors, a desirable feature in our view.

In terms of HTTP response time, NFR's Sentivist Smart Sensor delivered Web pages the fastest, at an average of about 144msec for an 11KB object. This is the average time it took for each of 1,500 users to request and retrieve a Web page with a single 11KB object, with no attack traffic present. The NFR sensor also flew through the 1% and 4% attack tests, with response times lower than those for all other vendors' baseline measurements.

Something went horribly wrong for the Sentivist device in the 16% attack test, however, with response times registering nearly 80 times higher than in the baseline test. It could be simply an anomalous result; response time didn't increase nearly as much in the two and four port-pair tests on the Sentivist device. Further, the device's latency spiked only when hit with exploit traffic at more than 60Mbps, suggesting a serious and dedicated denial-of-service (DoS) attack was underway. After we concluded testing, NFR says it identified and corrected a CPU oversubscription issue, but we did not verify this.

Among other devices, ipAngel's response time degraded the least as we ratcheted up attack rates. This isn't too surprising, considering the powerful sensor hardware the vendor supplied for testing: the ipAngel sensor had eight dual-core Opteron CPUs.

It's important to note that all results presented here are averages over the three-minute steady-state phase of our tests. These averages are valid, but they don't tell the whole story. As dramatic as the reduction in the average performance was in some tests, actual results over time show an even sharper drop in response to attacks (see link to TCP forwarding rates under attack, below).

TCP forwarding rates under attack over time

All IPS systems slowed traffic to some extent under our heaviest attack, but the degradation differed in terms of degree and duration. ipAngel's rates degraded the least, although the rate at the end of the test for this product was 824Mbps, more than 100Mbps lower than the system's 929Mbps rate at the beginning of the test. Top Layer's IPS 5500 did the best job of bouncing back to its original rate after an attack, but even so it momentarily slowed down traffic by more than 550Mbps, to less than 400Mbps. Whether users would notice this slowdown depends on the application. Something involving sustained high-speed data transfer (for example, FTP) would experience a brief slowdown.


The TippingPoint 5000E's rates dipped to 10Mbps under attack, down from around 400Mbps, and it's even worse for the others, with rates going down all the way to zero. The Demarc and NFR numbers suggest an overload, while the Fortinet device appears to recover, then falter again.

The sharp fall in TCP rates also has an effect on HTTP page-response time (see link graphic, below):

HTTP response time under attack

Response time - the interval between a client requesting and receiving a Web page - is only a few hundred milliseconds in baseline tests. Under our heaviest attack, however, many IPS systems introduced delays running well into the seconds. Ambiron TrustWave and Top Layer IPS systems did the best job of maintaining low and consistent response time under attack.

These results show that IPS devices have the potential to cause significant delays in network performance, way out of proportion to the amount of malicious traffic in the network. In effect, an IPS could be the instrument that delivers a self-inflicted DoS attack, where a small amount of attack traffic can make a gigabit network painfully slow for Web traffic and completely unusable for file and print service.

After testing concluded, Demarc said new performance parameters in the Bivio sensor hardware it uses would have dramatically improved its numbers. Unfortunately, time constraints prevented us from verifying that.

We also measured UDP throughput. We consider the UDP data less important than the TCP data, because UDP typically is a much smaller percentage of traffic on the Internet side of production networks, but these tests still are a useful way to describe the absolute limits of device forwarding and delay. If you plan to put the IPS deep in your network, UDP traffic from sources such as backups or storage servers could form the bulk of your traffic.

Most devices moved midsize and large UDP packets at or near the theoretical line rate. The two exceptions were FortiGate-3600, which moved midsize packets at about 50% of line rate, and ipAngel, which moved UDP traffic (for all packet lengths) at far lower rates than it moved TCP traffic. Ambiron TrustWave says its sensor used betas of interface device drivers and later versions show higher throughput and lower latency with UDP; we did not verify this.


As in the TCP tests, latency in the UDP testing also spiked sharply when we subjected most IPS systems to attack, with hundredfold (or more) increases in delay not uncommon. The only exception was ipAngel, which delayed packets by roughly the same amount in the attack tests as in the baseline test. This could be attributable to the ipAngel's UDP throughput, which is much lower than that of the other devices in this test.

We gave all vendors an opportunity to review and respond to test results before publication. TippingPoint found in internal testing that latency would have been far lower had we measured at 95%, not 100% of the throughput rate. Top Layer asked for a smaller reduction in load (perhaps to 99.9%) and attributed its increased UDP latency to clocking differences between our test tools and its IPS.

While lower loads probably would have produced lower delays, we respectfully disagree with both vendors' suggestions, on two grounds. First, as described in RFC 2544 - the industry standard for network device performance benchmarking - latency is measured at the throughput rate and not at X percent of the throughput rate, where X is some number that produces "good" latency.

Second, neither vendor's device bears a sticker warning customers that rates should never exceed X percent of line rate. If vendors want to claim high throughput, they also should measure latency at the throughput level.
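For readers not familiar with the RFC 2544 procedure referred to above: the throughput rate is found by searching for the highest offered load with zero frame loss, and latency is then measured while offering traffic at exactly that rate. The sketch below illustrates only that control logic; offer_load() and measure_latency() are hypothetical hooks into whatever traffic generator is in use, not any real tool's API.

[code]
# Rough sketch of RFC 2544-style benchmarking logic (not a real tool):
# binary-search the zero-loss throughput, then measure latency at that rate.
# offer_load() and measure_latency() are hypothetical traffic-generator hooks.

def offer_load(rate_mbps: float) -> float:
    """Offer traffic at rate_mbps for one trial; return the observed loss ratio."""
    raise NotImplementedError("hook into your traffic generator here")

def measure_latency(rate_mbps: float) -> float:
    """Measure average latency (ms) while offering traffic at rate_mbps."""
    raise NotImplementedError("hook into your traffic generator here")

def rfc2544_throughput(line_rate_mbps: float, resolution_mbps: float = 1.0) -> float:
    """Highest offered load (Mbps) at which no frames are lost."""
    lo, hi, best = 0.0, line_rate_mbps, 0.0
    while hi - lo > resolution_mbps:
        mid = (lo + hi) / 2
        if offer_load(mid) == 0.0:   # zero loss: throughput is at least mid
            best, lo = mid, mid
        else:                        # loss observed: back off
            hi = mid
    return best

if __name__ == "__main__":
    tput = rfc2544_throughput(1000.0)   # e.g. one Gigabit Ethernet port pair
    print("throughput:", tput, "Mbps;",
          "latency at the throughput rate:", measure_latency(tput), "ms")
[/code]

The point of the passage above is that the latency figure is defined at the zero-loss throughput itself, not at some lower load chosen to make the number look better.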

Two port-pairs

In baseline TCP performance tests, the IPS 5500 was the fastest device, with the TippingPoint 5000E not far behind (see link to The IPS torture test, scenario 2, below). The Ambiron TrustWave, Demarc and NFR devices all moved TCP traffic at rates much further below the theoretical maximum than in the single port-pair tests.

The IPS torture test: scenario 2
Testing with two port pairs
Throughput (Mbps) Perfect device Ambiron TrustWave Demarc NFR TippingPoint Top Layer
TCP baseline 1,930 1,013 1,446 382 1,825 1,911
TCP plus 1% attack 1,930 990 1,310 351 1,830 [11] 1,837 [12]
TCP plus 4% attack 1,930 937 782 307 1,340 [13] 1,429 [14]
TCP plus 16% attack 1,930 759 498 205 1,340 [15] 1,254 [16]
UDP baseline, 64-byte frames 3,048 82 288 712 1,226 605
UDP baseline, 512-byte frames 3,850 602 2,009 2,009 3,850 3,850
UDP baseline, 1518-byte frames 3,948 1,172 1,977 1,977 3,948 3,948
Latency (millisec) Perfect device Ambiron TrustWave Demarc NFR TippingPoint Top Layer
TCP baseline N/A 268.07 158.03 146.26 71.70 274.42
TCP plus 1% attack traffic N/A 269.05 169.73 162.89 86.11 [11] 84.95 [12]
TCP plus 4% attack traffic N/A 304.24 365.64 194.34 1,001.69 [13] 179.64 [14]
TCP plus 16% attack traffic N/A 460.65 16,692.43 7,074.89 1,062.49 [15] 1,338.22 [16]
UDP baseline N/A 0.09 0.31 0.09 0.08 2.33
UDP plus 1% attack traffic N/A 0.13 202.30 0.12 4.16 [17] 12.35 [18]
UDP plus 4% attack traffic N/A 0.18 391.80 0.10 8.95 [19] 12.18 [20]
UDP plus 16% attack traffic N/A 5.32 566.80 0.64 6.63 [21] 7.15 [22]
Footnotes: [11] Forwarded 9 Cisco malformed SNMP exploits; [12] Forwarded 174 Witty exploits; [13] Forwarded 13 Cisco exploits, disabled logging for 10 minutes; [14] Forwarded 524 Witty exploits; [15] Forwarded 57 Cisco exploits, disabled logging for 10 minutes; [16] Forwarded 1158 SQL Slammer, 1140 Witty, and 1138 Cisco exploits; [17] Disabled logging for 10 minutes; [18] Forwarded 199 Witty exploits, incorrectly labeled some exploits as SYN floods despite pure UDP load; [19] Disabled logging for 10 minutes; [20] Forwarded 139 Witty exploits, incorrectly labeled some exploits as SYN floods despite pure UDP load; [21] Disabled logging for 10 minutes; [22] Forwarded 33 Witty exploits, incorrectly labeled some exploits as SYN floods despite pure UDP load.

The Top Layer and TippingPoint devices also produced the highest rates in the attack tests, but results were problematic. The TippingPoint 5000E forwarded a small amount of Cisco exploit traffic in all three of our attack tests, and disabled logging in our 4% and 16% attack tests. The Top Layer device forwarded small amounts of Witty worm traffic in all three attack tests. The issues for both vendors were the same as in the single port-pair tests: TippingPoint had a problem with the Cisco signature, and Top Layer had a problem with its firewall configuration.

The Sentarus sensor and ipAngel were the fastest IPS systems among devices that did not forward any exploit traffic. The Sentarus came out on top when we offered attacks at 1% of the TCP rate, moving traffic at close to the baseline speed. The ipAngel was quickest in the 4% and 16% attack tests, though rates were about 10% and 25% lower, respectively, than in the baseline test.

HTTP response times also shot up dramatically under attack, though in some cases the delays were lower with two port-pairs than with one. This could be attributed to device architecture, in which IPS sensors use dedicated CPUs and/or network processors for each port-pair.

In the UDP tests, the TippingPoint and Top Layer IPS systems were again the fastest, moving midsize and large frames at line rate. The Demarc and NFR devices were about half that fast: Both posted identical numbers, possibly because both use the same Bivio sensor hardware.

UDP latency was higher under attack than in the baseline tests, especially for Sentarus in the 16% attack test. However, excluding that one result, latency generally rose less with two port-pairs under attack than with one - again, possibly caused by distributed processing designs.

Four port-pairs

With four pairs of Gigabit Ethernet interfaces (thus, rates theoretically capable of rising as high as 8Gbps), this was the acid test for IPS performance.

The TippingPoint 5000E was hands-down the fastest IPS in our TCP baseline tests (see link to The IPS torture test, scenario 3, below). It moved a mix of applications at 3.434Gbps, not far from the test bed's theoretical top rate of 3.8Gbps, and about twice as fast as the next quickest sensor, ipAngel.

The IPS torture test: scenario 3
Testing with four port pairs
Throughput (Mbps) Perfect device Ambiron TrustWave Demarc NFR TippingPoint
TCP baseline 3,860 1,730 1,514 382 3,434
TCP plus 1% attack 3,860 1,692 1,268 351 3,402 [23]
TCP plus 4% attack 3,860 1,538 694 307 2,317 [24]
TCP plus 16% attack 3,860 1,317 350 205 1,875 [25]
UDP baseline, 64-byte frames 6,095 130 541 712 1,210
UDP baseline, 512-byte frames 7,699 1,203 2,556 2,009 4,018
UDP baseline, 1518-byte frames 7,896 2,400 2,899 1,977 4,454
Latency (millisec) Perfect device Ambiron TrustWave Demarc NFR TippingPoint
TCP baseline N/A 160.91 166.27 146.26 112.25
TCP plus 1% attack traffic N/A 167.80 237.12 162.89 110.72 [23]
TCP plus 4% attack traffic N/A 194.55 630.03 194.34 627.8 [24]
TCP plus 16% attack traffic N/A 636.18 15,285.89 7,074.89 491.99 [25]
UDP baseline N/A 1.24 343.63 0.11 0.04
UDP plus 1% attack traffic N/A 7.13 205.78 0.10 28.02 [26]
UDP plus 4% attack traffic N/A 8.64 388.81 0.11 9.93 [27]
UDP plus 16% attack traffic N/A 16.61 566.74 0.11 5.85 [28]
Footnotes: [23] Forwarded 1280 Cisco malformed SNMP exploits; [24] Forwarded 1128 Cisco exploits; [25] Forwarded 795 Cisco exploits, disabled logging for 10 minutes; [26] Disabled logging for 10 minutes; [27] Disabled logging for 10 minutes; [28] Disabled logging for 10 minutes.

In our attack tests, the TippingPoint 5000E again leaked small amounts of Cisco exploit traffic and also disabled logging in the 16% attack test.

Of devices with no security issues, ipAngel was fastest. As in tests with two port-pairs, ipAngel's TCP forwarding rates degraded as we ratcheted up attack rates, but on the other hand it did not leak any exploit traffic.

Most of the devices increased HTTP response time under attack, especially in the 16% attack test. In the worst case, response time through Sentarus spiked from 166msec in the baseline test to more than 15 seconds in the 16% attack test. That may have been attributable to a tuning parameter in the Bivio sensor, according to Demarc. Unfortunately, we learned about this parameter only after testing concluded.

TippingPoint's IPS was also the fastest in our UDP tests. In baseline tests it moved large packets at 4.454Gbps, the fastest single rate in our tests. It was also the top performer in baseline tests of short and medium-length packets.

Latency skyrocketed for multiple devices once we combined benign and attack UDP traffic. For example, the TippingPoint 5000E delayed benign UDP traffic by nearly 30 milliseconds in a test with attacks at 1% of its capacity, and the device also disabled logging in all three of our attack tests. The other products also slowed traffic by huge margins over the baseline test. The IPS with the best UDP latency under attack was Sentivist, not just with four port-pairs but indeed in all tests.

If the test results say anything, it's that performance and security are two very different goals, and - at least with these devices - the goals often may not bear any sensible relationship to one another.

These tests turned up two different kinds of IPS systems: devices that move traffic at very high rates, and devices that block attacks but aren't the speediest performers. Picking the right IPS comes down to finding the right balance between security and performance.

Newman is president of Network Test, an independent engineering services firm in Westlake Village, Calif. He can be reached at dnewman@networktest.com.


NW Lab Alliance

Newman and Joel Snyder are members of the Network World Lab Alliance, a cooperative of the premier reviewers in the network industry, each bringing to bear years of practical experience on every review. For more Lab Alliance information, including what it takes to become a member, go to www.networkworld.com/alliance.
Thanks to all

Network World gratefully acknowledges the vendors that supported this project. Spirent Communications supplied its Spirent ThreatEx, Avalanche, Reflector, SmartBits and AX/4000 test tools, and engineer Chuck McAuley assisted with ThreatEx configuration. Apcon supplied an Intellapatch virtual patch panel that tied together the test bed. And Red Hat supplied its Red Hat Enterprise Linux operating system, used on test-bed management servers.




Last edited by grantming on 2007-04-13 09:36; edited 2 times in total







华安信达(CISPS.org) ©2003 - 2012