Bug #5191
Sunstone memory leak
| Status: | Closed | Start date: | 06/13/2017 |
|---|---|---|---|
| Priority: | Normal | Due date: | |
| Assignee: | Javi Fontan | % Done: | 0% |
| Category: | Sunstone | | |
| Target version: | Release 5.4.1 | | |
| Resolution: | worksforme | Pull request: | |
| Affected Versions: | OpenNebula 5.2 | | |
Description
OpenNebula 5.2.1
MariaDB, the OpenNebula server, and Sunstone are installed in a VM with 2 GB of total memory.
OS: CentOS 7.3
```
# top -b -n 1 -c
  PID USER      PR  NI    VIRT    RES   SHR S  %CPU %MEM   TIME+ COMMAND
 3792 oneadmin  20   0 2077964 1,077g  7340 S   0,0 60,0 2:17.57 /usr/bin/ruby /usr/lib/one/sunstone/sunstone-server.rb
```
Steps:
1) Start Sunstone and log in.
2) Press F5 repeatedly on the dashboard page, or click the Dashboard item in the left menu.
3) Each refresh consumes an additional 1-2% of total memory (see the sketch below).
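A minimal sketch to drive this loop without a browser (not part of the original report; it assumes Sunstone on its default port 9869 and the server PID 3792 from the top output above, so adjust both for your setup; note that an unauthenticated GET returns the login page rather than the dashboard, so reuse a logged-in session cookie if you want the exact code path):

```ruby
#!/usr/bin/env ruby
# Sketch: hammer Sunstone with GET requests and watch the server's RSS grow.
# Assumptions: Sunstone on localhost:9869 (the default) and server PID 3792
# (taken from the top output above); without a session cookie this exercises
# the login page rather than the dashboard itself.
require 'net/http'

uri = URI('http://localhost:9869/')
pid = 3792

100.times do |i|
  Net::HTTP.get_response(uri)  # stands in for an F5 in the browser
  rss_kb = File.read("/proc/#{pid}/status")[/VmRSS:\s+(\d+)/, 1].to_i
  puts format('request %3d -> RSS %.1f MB', i + 1, rss_kb / 1024.0)
end
```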
History
#1 Updated by Anonymous about 4 years ago
- Target version set to Release 5.2.1
The same thing happens on Debian 8.8 with
ruby 2.1.5p273 (2014-11-13) [x86_64-linux-gnu]
#2 Updated by Javi Fontan almost 4 years ago
- Category set to Sunstone
- Assignee set to Javi Fontan
- Target version changed from Release 5.2.1 to Release 5.4.1
#3 Updated by Javi Fontan almost 4 years ago
I'm not able to reproduce the problem in 5.4.0 on CentOS 7. I've tested the standalone server with both memory and memcache sessions. Memory consumption topped out at about 200 MB; after that the garbage collector kicks in and frees memory.
Are you using the stock Ruby binary, with the gems installed by the packaged install_gems script?
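For reference, the session backend mentioned above is selected with the `:sessions:` key in /etc/one/sunstone-server.conf. A sketch of the relevant excerpt, with the stock default values as I recall them from 5.x configs (verify against your installed file):

```yaml
# /etc/one/sunstone-server.conf (excerpt) -- session storage backend.
# Defaults as shipped in 5.x; verify against your installed file.
:sessions: memory          # or: memcache

# Used only when :sessions: is memcache
:memcache_host: localhost
:memcache_port: 11211
:memcache_namespace: opennebula.sunstone
```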
#4 Updated by Anonymous almost 4 years ago
> Are you using the stock Ruby binary, with the gems installed by the packaged install_gems script?
Yes, the stock binary with the gems installed. I didn't test it on 5.4; maybe it is resolved there. We now run 5.2.1 with Unicorn, memcache, and unicorn-worker-killer, and in that configuration there is not as much leaking.
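For context, unicorn-worker-killer bounds a leak like this by recycling Unicorn workers once they pass a request-count or memory threshold. A minimal sketch of the middleware lines in a config.ru (the thresholds are illustrative, not the reporter's actual settings):

```ruby
# config.ru (excerpt) -- recycle leaking Sunstone workers under Unicorn.
# Thresholds are illustrative, not the reporter's actual settings.
require 'unicorn/worker_killer'

# Restart a worker after it has served 500-600 requests (the bound is
# randomized per worker so they do not all restart at once).
use Unicorn::WorkerKiller::MaxRequests, 500, 600

# Restart a worker once its RSS crosses roughly 192-256 MB.
use Unicorn::WorkerKiller::Oom, (192 * 1024**2), (256 * 1024**2)

# ... followed by Sunstone's own Rack application, as in the config.ru
# shipped with the Sunstone packages.
```

The gem first signals a worker with QUIT, so it finishes its in-flight request before restarting; the cap contains the leak without dropping requests.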
#5 Updated by Javi Fontan almost 4 years ago
- Status changed from Pending to Closed
- Resolution set to worksforme
I'm closing this since we are not able to reproduce it on 5.4. If it happens again on that version we can reopen the issue.
Thanks for reporting.