<div dir="ltr"><div><div>Hi,<br>Thank you for your response. You were right: window scaling was on. However, even after I turned it off, the CWND values from the two approaches still differ, though they are much closer now (about a 3% error rate between my estimate and the value reported by Web10G).<br>
</div>Thanks for your help!<br></div>Mojgan <br>
</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Fri, Jul 11, 2014 at 11:59 AM, <span dir="ltr"><<a href="mailto:Valdis.Kletnieks@vt.edu" target="_blank">Valdis.Kletnieks@vt.edu</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Fri, 11 Jul 2014 11:50:39 -0400, mojgan ghasemi said:<br>
<br>
> I have a connection that, according to Web10G, never exits slow start (SS), and its<br>
> SlowStart variable is 235, meaning that the CWND has been increased a total<br>
> of 235 times.<br>
><br>
> However, when I look at the packet traces of the connection using tcpdump,<br>
> I see more than 15K new ACKs being delivered for this connection (even with<br>
> the byte-counting approach), each of which, according to RFCs such as RFC 2581,<br>
> should increase the CWND by one MSS. So why does the CWND increase only<br>
> 235 times?<br>
<br>
Is TCP Window Scaling in effect? If so, you could be seeing "steps" in<br>
the window. For example, say you start with a (hypothetical) window of<br>
12, and a scaling of 3, and packets coming in that each increase the window<br>
by 3.<br>
<br>
<pre>packet   internal window   on the wire
  0            12                1
  1            15                1
  2            18                2
  3            21                2
  4            24                3
  5            27                3
  6            30                3
  7            33                4</pre>
<br>
You get the idea. 15K ACKs and 235 updates work out to roughly 64 ACKs per<br>
update (2^6) - making me take a wild guess that you have a TCP window scale of 6<br>
in effect?<br>
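<br>
A quick sketch of the quantization above (the window values and the scale factors are the hypothetical ones from this thread, not taken from any real trace):<br>
<br>
<pre>
# The advertised window field carries the internal window right-shifted
# by the negotiated scale factor (RFC 1323 window scaling), so several
# internal updates can map to the same on-the-wire value.
def on_the_wire(window, shift):
    return window >> shift

internal = [12 + 3 * i for i in range(8)]          # window grows by 3 per packet
wire = [on_the_wire(w, 3) for w in internal]       # scale factor of 3
print(wire)   # prints [1, 1, 2, 2, 3, 3, 3, 4] - the "steps" from the table

# 15K ACKs producing only 235 visible updates is about 64 ACKs per
# step, i.e. 2**6, consistent with a window scale of 6.
print(15000 // 235)   # prints 63
</pre>
<br>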
<br>
<br>
</blockquote></div><br></div>