Hi,

I have a CentOS 5.6 Xen VPS which loses network connectivity once in a
while with the following error.

=========================================
-bash-3.2# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
ping: sendmsg: No buffer space available
=========================================

All my investigation so far has led me to believe that it is caused by
the skbuff cache getting full.

=========================================================================

PROC-SLABINFO
skbuff_fclone_cache 227 308 512 7 1 : tunables 54 27 8 : slabdata 44 44 0
skbuff_head_cache 1574 1650 256 15 1 : tunables 120 60 8 : slabdata 110 110 0
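As a sanity check, the two slabinfo lines above can be parsed to see how full those caches actually are. This is a small sketch assuming the 2.6-era slabinfo column order (name, active_objs, num_objs, objsize, ...); note that these figures describe usage of the slabs already allocated, not a hard limit, since slab caches normally grow on demand.

```python
# Parse the two skbuff lines from /proc/slabinfo quoted above and
# report object utilization and total allocated size per cache.
lines = [
    "skbuff_fclone_cache 227 308 512 7 1 : tunables 54 27 8 : slabdata 44 44 0",
    "skbuff_head_cache 1574 1650 256 15 1 : tunables 120 60 8 : slabdata 110 110 0",
]
for line in lines:
    fields = line.split()
    # Assumed column layout: name, active objects, total objects, object size (bytes)
    name, active, total, objsize = fields[0], int(fields[1]), int(fields[2]), int(fields[3])
    pct = 100.0 * active / total
    print(f"{name}: {active}/{total} objects in use ({pct:.0f}%), "
          f"{total * objsize / 1024:.0f} KiB allocated")
```

By this reading the fclone cache is only about three-quarters used and both caches are tiny (well under 1 MiB), which is worth keeping in mind before blaming skbuff slab exhaustion for the ENOBUFS errors.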

SLAB-TOP
Active / Total Objects (% used) : 2140910 / 2200115 (97.3%)
Active / Total Slabs (% used) : 139160 / 139182 (100.0%)
Active / Total Caches (% used) : 88 / 136 (64.7%)
Active / Total Size (% used) : 512788.94K / 520252.14K (98.6%)
Minimum / Average / Maximum Object : 0.02K / 0.24K / 128.00K

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
664000 620290 93% 0.09K 16600 40 66400K buffer_head
409950 408396 99% 0.21K 22775 18 91100K dentry_cache
343056 340307 99% 0.08K 7147 48 28588K selinux_inode_security
338590 336756 99% 0.74K 67718 5 270872K ext3_inode_cache
143665 143363 99% 0.06K 2435 59 9740K size-64
99540 99407 99% 0.25K 6636 15 26544K size-256
96450 96447 99% 0.12K 3215 30 12860K size-128
60858 60858 100% 0.52K 8694 7 34776K radix_tree_node
12420 11088 89% 0.16K 540 23 2160K vm_area_struct
5895 4185 70% 0.25K 393 15 1572K filp
4816 3355 69% 0.03K 43 112 172K size-32
2904 2810 96% 0.09K 66 44 264K sysfs_dir_cache
2058 1937 94% 0.58K 343 6 1372K proc_inode_cache
1728 1215 70% 0.02K 12 144 48K anon_vma
1650 1590 96% 0.25K 110 15 440K skbuff_head_cache
1498 1493 99% 2.00K 749 2 2996K size-2048
1050 1032 98% 0.55K 150 7 600K inode_cache
792 767 96% 1.00K 198 4 792K size-1024
649 298 45% 0.06K 11 59 44K pid
600 227 37% 0.09K 15 40 60K journal_head
590 298 50% 0.06K 10 59 40K delayacct_cache
496 424 85% 0.50K 62 8 248K size-512
413 156 37% 0.06K 7 59 28K fs_cache
404 44 10% 0.02K 2 202 8K biovec-1
390 293 75% 0.12K 13 30 52K bio
327 327 100% 4.00K 327 1 1308K size-4096
320 190 59% 0.38K 32 10 128K ip_dst_cache
308 227 73% 0.50K 44 7 176K skbuff_fclone_cache
258 247 95% 0.62K 43 6 172K sock_inode_cache
254 254 100% 1.84K 127 2 508K task_struct
252 225 89% 0.81K 28 9 224K signal_cache
240 203 84% 0.73K 48 5 192K shmem_inode_cache
204 204 100% 2.06K 68 3 544K sighand_cache
202 4 1% 0.02K 1 202 4K revoke_table
195 194 99% 0.75K 39 5 156K UDP
159 77 48% 0.07K 3 53 12K eventpoll_pwq
145 139 95% 0.75K 29 5 116K files_cache
144 41 28% 0.02K 1 144 4K journal_handle
140 140 100% 0.88K 35 4 140K mm_struct
140 77 55% 0.19K 7 20 28K eventpoll_epi
135 135 100% 2.12K 135 1 540K kmem_cache
121 45 37% 0.69K 11 11 88K UNIX
119 114 95% 0.52K 17 7 68K idr_layer_cache
118 41 34% 0.06K 2 59 8K blkdev_ioc
112 32 28% 0.03K 1 112 4K tcp_bind_bucket
110 56 50% 0.17K 5 22 20K file_lock_cache
106 35 33% 0.07K 2 53 8K avc_node
105 98 93% 1.50K 21 5 168K TCP
105 100 95% 1.04K 15 7 120K bio_map_info
92 1 1% 0.04K 1 92 4K dnotify_cache
80 18 22% 0.19K 4 20 16K tw_sock_TCP
70 44 62% 0.27K 5 14 20K blkdev_requests
59 23 38% 0.06K 1 59 4K biovec-4
59 13 22% 0.06K 1 59 4K fib6_nodes
59 11 18% 0.06K 1 59 4K ip_fib_hash
59 11 18% 0.06K 1 59 4K ip_fib_alias
53 53 100% 0.07K 1 53 4K taskstats_cache
53 1 1% 0.07K 1 53 4K inotify_watch_cache
48 48 100% 0.16K 2 24 8K sigqueue
48 8 16% 0.08K 1 48 4K crq_pool
45 27 60% 0.25K 3 15 12K mnt_cache
45 29 64% 0.25K 3 15 12K dquot
45 32 71% 0.25K 3 15 12K sgpool-8
40 19 47% 0.19K 2 20 8K key_jar
32 32 100% 0.50K 4 8 16K sgpool-16
32 32 100% 1.00K 8 4 32K sgpool-32
32 32 100% 2.00K 16 2 64K sgpool-64
32 32 100% 4.00K 32 1 128K sgpool-128
31 31 100% 8.00K 31 1 248K size-8192
30 27 90% 1.54K 6 5 48K blkdev_queue
30 14 46% 0.12K 1 30 4K request_sock_TCP
30 3 10% 0.12K 1 30 4K inet_peer_cache

FREE-M
total used free shared buffers cached
Mem: 8192 4821 3370 0 500 2793
-/+ buffers/cache: 1527 6664
Swap: 8191 0 8191

PROC-MEMINFO
MemTotal: 8388608 kB
MemFree: 3451384 kB
Buffers: 512352 kB
Cached: 2860580 kB
SwapCached: 0 kB
Active: 2971812 kB
Inactive: 1178908 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 8388608 kB
LowFree: 3451384 kB
SwapTotal: 8388600 kB
SwapFree: 8388600 kB
Dirty: 644 kB
Writeback: 0 kB
AnonPages: 777524 kB
Mapped: 24816 kB
Slab: 557716 kB
PageTables: 16300 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 12582904 kB
Committed_AS: 1624128 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 4556 kB
VmallocChunk: 34359733771 kB
===========================================================

Can anyone provide any insight into a possible solution to this issue?

--
Kind Regards,
Sherin

  • Mailing Lists at Sep 1, 2011 at 10:08 pm
    I haven't worked with Xen in a few months, but I'd highly suggest looking at
    the Xen host server itself instead of the VPS. Set up some sort of
    monitoring on the VPS and correlate the times it loses internet connectivity
    with the host server logs; maybe that will provide some insight. Unless it's
    this VPS itself. Was it built from Xen's templates or via a P2V conversion?
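    The monitoring suggested above can be as simple as a timestamped ping probe. Below is a minimal sketch (in Python 3, so it may need adapting for a 2011-era guest; the target, interval, and log path are all placeholders to adjust):

```python
import subprocess
import time

def ping_once(target, timeout_s=2):
    """Return True if a single ICMP echo to target succeeds (uses the system ping)."""
    try:
        return subprocess.call(
            ["ping", "-c", "1", "-W", str(timeout_s), target],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0
    except OSError:
        # ping binary not available; treat as a failed probe
        return False

def failure_line(target, when=None):
    """Build one timestamped log line for a failed probe."""
    stamp = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(when))
    return f"{stamp} ping to {target} failed"

if __name__ == "__main__":
    # One probe; in practice run this from cron (or in a loop) every 30 s,
    # append failure_line(...) to a log file, and compare the logged
    # timestamps against the Xen host's logs.
    if not ping_once("8.8.8.8"):
        print(failure_line("8.8.8.8"))
```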
    On Thu, Sep 1, 2011 at 12:36 AM, Sherin George wrote:
    [quoted text trimmed]
    _______________________________________________
    CentOS mailing list
    CentOS at centos.org
    http://lists.centos.org/mailman/listinfo/centos

Discussion Overview
groupcentos @
categoriescentos
postedSep 1, '11 at 12:36a
activeSep 1, '11 at 10:08p
posts2
users2
websitecentos.org
irc#centos

2 users in discussion

Sherin George: 1 post Mailing Lists: 1 post
