Linux 2.4 Kernel Tunings used in SPECweb99 results


File System Tunings
URL: http://www.linuxhq.com/kernel/v2.4/doc/sysctl/fs.txt.html

/proc/sys/fs/file-max
	Maximum number of file handles that the Linux kernel will allocate;
	this establishes the maximum number of files that the system can
	have open at one time.
	Default: 4096
	Benchmark setting: 128000
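	These /proc settings do not persist across a reboot; a minimal
	sketch of applying this value at runtime, assuming a root shell:

		# raise the system-wide limit on open file handles
		echo 128000 > /proc/sys/fs/file-max

		# equivalent, using the sysctl utility
		sysctl -w fs.file-max=128000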





IPv4 Tunings
URL: http://www.linuxhq.com/kernel/v2.4/doc/networking/ip-sysctl.txt.html

/proc/sys/net/core/rmem_max
/proc/sys/net/core/rmem_default
	Default and maximum size of the Linux socket input queues.  Larger
	queues reduce the risk of delays, so long as the extra memory for
	the increased queues is readily available.  Not all combinations of
	(older) kernels and (older) network cards handle larger queue sizes
	properly.
	Default: 65536
	Benchmark setting: 1048576

/proc/sys/net/core/wmem_max
/proc/sys/net/core/wmem_default
	Default and maximum size of the Linux socket output queues; this
	enables web servers to send more of a web page to the network stack
	in fewer system calls (ideally the entire web page in one call).
	Default: 65536
	Benchmark setting: 1048576
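	A sketch of setting both the input and output queues above to the
	benchmark values at runtime, assuming a root shell:

		# socket receive (input) queues
		echo 1048576 > /proc/sys/net/core/rmem_default
		echo 1048576 > /proc/sys/net/core/rmem_max
		# socket send (output) queues
		echo 1048576 > /proc/sys/net/core/wmem_default
		echo 1048576 > /proc/sys/net/core/wmem_max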

/proc/sys/net/ipv4/ip_local_port_range
	Set the range of local ports that can be used.  [Note these values
	are subject to the limits of the 16-bit port field that IPv4 uses.]
	Ports are consumed by network connections in the various connected
	and tear-down states; if/when no more ports are available, any
	attempt to open a new connection with connect() will fail.
	Default: 32768 61000
	Benchmark setting: 16384 65536
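	This entry takes both values in a single write; a sketch, assuming
	a root shell:

		# widen the range of local ports available for new connections
		echo "16384 65536" > /proc/sys/net/ipv4/ip_local_port_range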

/proc/sys/net/ipv4/ip_forward
	Do we, or do we not, forward packets between different network
	interfaces on this system.
	Default: 0
	Benchmark setting: 0 {we will not be forwarding any packets}

/proc/sys/net/ipv4/tcp_sack
	Do we, or do we not, use the optional TCP selective acknowledgements.
	[Most useful on lossy, congested, long-delay, or other erratic networks]
	Default: 1
	Benchmark setting: 0 {do not use selective acknowledgements}

/proc/sys/net/ipv4/tcp_timestamps
	Do we, or do we not, use the RFC1323 TCP timestamps.
	[Most useful on lossy, congested, long-delay, or other erratic networks]
	Default: 1
	Benchmark setting: 0 {do not generate timestamps}

/proc/sys/net/ipv4/tcp_window_scaling
	Do we, or do we not, use the RFC1323 TCP window scaling.
	[Most useful on lossy, congested, long-delay, or other erratic networks]
	Default: 1
	Benchmark setting: 0 {do not use window scaling}
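	The IPv4 settings above can also be applied persistently from
	/etc/sysctl.conf; a sketch with the benchmark values, assuming a
	procps sysctl that reads that file at boot (or via sysctl -p):

		net.ipv4.ip_forward = 0
		net.ipv4.tcp_sack = 0
		net.ipv4.tcp_timestamps = 0
		net.ipv4.tcp_window_scaling = 0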

/proc/sys/net/ipv4/tcp_max_tw_buckets
	Maximum number of TIME_WAIT sockets held by the system simultaneously.
	[Web servers with very high hit rates can generate very large numbers
	of connections in the TIME_WAIT state, and SPECweb99 rules require the
	web servers to maintain this state for at least 60 seconds.]
	Benchmark setting: 2000000 
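	A sketch of raising this limit at runtime, assuming a root shell:

		# allow up to 2 million sockets in TIME_WAIT at once
		echo 2000000 > /proc/sys/net/ipv4/tcp_max_tw_buckets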





VM Tunings
URL: http://www.linuxhq.com/kernel/v2.4/doc/sysctl/vm.txt.html

/proc/sys/vm/bdflush
	Tuning values for the bdflush kernel daemon (flushes dirty buffers).
	Parameter 1 is the percentage of buffers that must be dirty before
	the daemon acts.
	Parameter 2 is the maximum number of buffers to flush at one time.
	Parameter 3 is the number of buffers to add to the free list each time.
	Parameter 4 is the number of dirty buffers before waking bdflush.
	Parameter 5 is ignored.
	Parameter 6 is the time a normal buffer ages before flushing.
	Parameter 7 is the time a superblock ages before flushing.
	Parameter 8 is ignored.
	Parameter 9 is ignored.
	Default: 40 500 64 256 15 30*HZ 5*HZ 1884 2
	Benchmark setting: 100 5000 640 2560 150 30000 5000 1884 2
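	All nine parameters are written in a single line, in order; a sketch
	with the benchmark values, assuming a root shell:

		echo "100 5000 640 2560 150 30000 5000 1884 2" > /proc/sys/vm/bdflush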