Linux graphics performance (v1.5.1) - perspectives

Prompted by this Linux graphics performance discussion:

 

Spoiler

On 7.3.2018 at 9:27 AM, [John161](< base_url >/?app=core&module=members&controller=profile&id=143536) said:

Yep, that performance boost on Linux was long overdue. I finally get a constant 60+ fps, even in big battles.

 

I personally would like to be able to zoom further out without losing the detailed map. So far I like the old map more, because I was able to see more at once; the zone one is rather useless.

 

Spoiler

8 hours ago, [Niran](< base_url >/?app=core&module=members&controller=profile&id=36423) said:

Man, is it really that bad on Linux, or is everyone just playing on a toaster? I'm getting 120-200 FPS everywhere, and I'm using 6-year-old hardware (GTX 670 and AMD FX-6200); this hasn't changed in the past years.

 

Spoiler

15 hours ago, [John161](< base_url >/?app=core&module=members&controller=profile&id=143536) said:

Yep, it was that bad. The game on Linux is/was CPU bound.

Before, performance was <50% of Windows in big battles and ~75% in smaller ones (53-80 fps in 12v12 PvP battles for me; 2v2 ran at ~120 fps).

Now I mostly have at least 50% more fps, so around 70-80 fps as a minimum. The Ellydium hangar even saw a 300% boost, from 60 fps to ~180 fps.

And my hardware is an i7-4770k @ 4.2 GHz and an R9 280X (5 years old), so a bit faster than yours. Avarshina has, AFAIK, some i5 and a GT 640.

 

I have this hardware (7-8 years old):

- CPU: Intel® Core™ i5 @ 3.40 GHz × 4

- GPU: NVIDIA GK107 [GeForce GT 640]

- RAM: 15.6 GiB

 

I was planning to buy a better graphics card, because my old GT 640 only gives me 70-75 FPS in the hangar after SC's v1.5.1 'Journey' patch (formerly ~30-35 FPS).

 

Q: John says SC was CPU bound, so should I invest in a better CPU?

Q: The new Linux kernel has better AMD proprietary drivers now; should I go with AMD or NVIDIA (e.g. a 1050)?

 

Some discussion might help. If you know, please share!

 

P.S.: [linux graphics bug](< base_url >/index.php?/topic/36157-graphic-bugs-with-nvidia-384-binary-driver/&tab=comments#comment-420910) (do you have some hints? a better driver?)

 

22 hours ago, avarshina said:

Q: The new Linux kernel has better AMD proprietary drivers now; should I go with AMD or NVIDIA (e.g. a 1050)?

Mesa (the FLOSS driver) has better performance than the proprietary ones.

But don't ask me which vendor works better for Star Conflict. NVIDIA has graphics bugs that you don't have with Mesa and vice versa, and performance-wise, idk.

22 hours ago, avarshina said:

Q: John says SC was CPU bound, so should I invest in a better CPU?

I will check some stuff tomorrow to see whether one of my cores is still at 100% utilization while my GPU sits at 50% max.

As always, it depends on your CPU/GPU combination. E.g., an i7-4770k @ 4 GHz will be the limiting factor if you have anything faster than roughly a GTX 660 / AMD R7 270X.

Laptops will probably run into CPU limitations far earlier because of their mostly far lower single-core performance.
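John's point can be sketched with a toy frame-time model (all millisecond figures below are invented for illustration, not measurements): the frame rate is capped by whichever of CPU and GPU takes longer per frame, so a faster GPU buys nothing while the CPU is the bottleneck.

```python
# Toy model of a CPU- vs GPU-bound frame rate. The millisecond
# figures are made up for illustration; they are not measurements.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Frames per second when per-frame CPU and GPU work overlap:
    the slower of the two sets the frame time."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# CPU-bound: 12 ms of CPU work per frame caps the game at ~83 fps ...
print(round(fps(cpu_ms=12.0, gpu_ms=8.0), 1))   # 83.3
# ... and a GPU twice as fast changes nothing:
print(round(fps(cpu_ms=12.0, gpu_ms=4.0), 1))   # 83.3
# Only a faster CPU lifts the cap:
print(round(fps(cpu_ms=8.0, gpu_ms=4.0), 1))    # 125.0
```

This is why a GPU upgrade can show zero gain in big battles while still helping in GPU-heavy scenes like the hangar.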

 

My 4 CPU cores (on low/medium/super graphics settings) do not go over 50%, and processing threads keep being redistributed between them.

15 hours ago, avarshina said:

My 4 CPU cores (on low/medium/super graphics settings) do not go over 50%, and processing threads keep being redistributed between them.

I still have one core constantly at 100%, 3 at 14%, and a fifth at 12.5%, while GPU utilization is ~50%. So at least with my CPU/GPU combination it is CPU bound.

Also, the R9 280X is more than twice as fast as a GT 640, so I would expect that with a desktop CPU @ ~4 GHz you will run into CPU limitations with anything faster than a GTX 660 or so.
 

Spoiler

[screenshot: fhFmrdN.png]

 

 

john - locks like driver issue - ?

what system-overview software are you using on your desktop wallpaper?

1 hour ago, avarshina said:

what system-overview software are you using on your desktop wallpaper?

I'm using Gallium HUD, which is a component of Mesa.
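For reference, Gallium HUD is switched on through the GALLIUM_HUD environment variable of Mesa's Gallium drivers; a minimal sketch (the counter names `fps` and `cpu0`..`cpu3` are standard Mesa HUD items, but exact availability depends on your Mesa build, and it does nothing on the NVIDIA proprietary driver):

```shell
# Overlay fps in one pane and the four per-core loads in another
# (a comma separates panes, '+' stacks graphs inside one pane):
GALLIUM_HUD="fps,cpu0+cpu1+cpu2+cpu3" glxgears

# For a Steam game such as SC, the same variable goes into the launch options:
#   GALLIUM_HUD="fps,cpu" %command%
```

`GALLIUM_HUD=help` prints the full list of counters your driver supports.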

1 hour ago, avarshina said:

john - locks like driver issue - ?

You mean, looks like a driver issue?

I don't think it is related to drivers, but I don't have an NVIDIA GPU available to test with.

 

Also, if I look at my CPU utilization with a tool that has a much slower update rate than Gallium HUD (which updates at 60 Hz), as nearly every system monitor has, I see lower per-core utilization: the load is averaged over a whole second, during which the main thread has already jumped between a couple of cores. This may be why you say 50% is your maximum utilization, when in fact one core was at 100% for half a second and another core at 100% for the other half.
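This averaging effect can be reproduced with a purely synthetic simulation (no hardware involved; the 0.5 s migration interval is an assumption for the demo): a single busy thread alternates between core 0 and core 1 every half second, and the per-core load a monitor reports depends entirely on the length of its reporting window.

```python
# Synthetic demo of sampling-window smearing. One busy thread hops
# between core 0 and core 1 every 0.5 s; a monitor averaging over a
# 1 s window reports ~50% on each core, while a 60 Hz window catches
# one core pegged near 100%. The 0.5 s hop interval is an assumption.

def busy_core(t: float) -> int:
    """Which core the single busy thread occupies at time t (seconds)."""
    return 0 if (t % 1.0) < 0.5 else 1

def window_utilization(window: float, t0: float = 0.0, dt: float = 0.001):
    """Per-core utilization (%) over one reporting window starting at t0,
    integrated in small time steps of dt."""
    busy = [0.0, 0.0]
    steps = max(1, int(window / dt))
    for i in range(steps):
        busy[busy_core(t0 + i * dt)] += dt
    total = steps * dt
    return [100.0 * b / total for b in busy]

print(window_utilization(1.0))      # ~[50, 50]: looks half-loaded
print(window_utilization(1 / 60))   # ~[100, 0]: one core pegged
```

The underlying load is identical in both calls; only the reporting window changes, which is exactly the discrepancy between a 1 Hz system monitor and Gallium HUD.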

16 hours ago, John161 said:

I'm using Gallium HUD, which is a component of Mesa.

You mean, looks like a driver issue?

I don't think it is related to drivers, but I don't have an NVIDIA GPU available to test with.

see this: https://www.heise.de/ct/ausgabe/2018-4-Linux-4-15-AMD-Vega-Support-RISC-V-Unterstuetzung-3954216.html

https://www.heise.de/ct/artikel/Die-Neuerungen-von-Linux-4-15-3900646.html

have you tried the proprietary AMD drivers?

but you have a reasonably fresh kernel, don't you? 4.13?

my Ubuntu is running kernel Linux 4.13.0-36-generic x86_64

 

16 hours ago, John161 said:

Also, if I look at my CPU utilization with a tool that has a much slower update rate than Gallium HUD (which updates at 60 Hz), as nearly every system monitor has, I see lower per-core utilization: the load is averaged over a whole second, during which the main thread has already jumped between a couple of cores. This may be why you say 50% is your maximum utilization, when in fact one core was at 100% for half a second and another core at 100% for the other half.

Interesting remarks; I will try to investigate this further. Have you read about this flaw in the resolution of the standard system-utilization tool (the graphical usage-monitoring tool)?

[@John161](< base_url >/index.php?/profile/143536-john161/) I had the console tool glmark2 installed and ran it.

I got a glmark2 score of 3249:

=======================================================
    glmark2 2014.03+git20150611.fa71af2d
=======================================================
    OpenGL Information
    GL_VENDOR: NVIDIA Corporation
    GL_RENDERER: GeForce GT 640/PCIe/SSE2
    GL_VERSION: 4.5.0 NVIDIA 384.111
=======================================================
[build] use-vbo=false: FPS: 3499 FrameTime: 0.286 ms
[build] use-vbo=true: FPS: 4352 FrameTime: 0.230 ms
[texture] texture-filter=nearest: FPS: 3288 FrameTime: 0.304 ms
[texture] texture-filter=linear: FPS: 3217 FrameTime: 0.311 ms
[texture] texture-filter=mipmap: FPS: 3940 FrameTime: 0.254 ms
[shading] shading=gouraud: FPS: 3570 FrameTime: 0.280 ms
[shading] shading=blinn-phong-inf: FPS: 3088 FrameTime: 0.324 ms
[shading] shading=phong: FPS: 2830 FrameTime: 0.353 ms
[shading] shading=cel: FPS: 2920 FrameTime: 0.342 ms
[bump] bump-render=high-poly: FPS: 2089 FrameTime: 0.479 ms
[bump] bump-render=normals: FPS: 4140 FrameTime: 0.242 ms
[bump] bump-render=height: FPS: 4172 FrameTime: 0.240 ms
[effect2d] kernel=0,1,0;1,-4,1;0,1,0;: FPS: 2298 FrameTime: 0.435 ms
[effect2d] kernel=1,1,1,1,1;1,1,1,1,1;1,1,1,1,1;: FPS: 1538 FrameTime: 0.650 ms
[pulsar] light=false:quads=5:texture=false: FPS: 3358 FrameTime: 0.298 ms
[desktop] blur-radius=5:effect=blur:passes=1:separable=true:windows=4: FPS: 1053 FrameTime: 0.950 ms
[desktop] effect=shadow:windows=4: FPS: 1474 FrameTime: 0.678 ms
[buffer] columns=200:interleave=false:update-dispersion=0.9:update-fraction=0.5:update-method=map: FPS: 885 FrameTime: 1.130 ms
[buffer] columns=200:interleave=false:update-dispersion=0.9:update-fraction=0.5:update-method=subdata: FPS: 1123 FrameTime: 0.890 ms
[buffer] columns=200:interleave=true:update-dispersion=0.9:update-fraction=0.5:update-method=map: FPS: 929 FrameTime: 1.076 ms
[ideas] speed=duration: FPS: 3067 FrameTime: 0.326 ms
[jellyfish] <default>: FPS: 1686 FrameTime: 0.593 ms
[terrain] <default>: FPS: 208 FrameTime: 4.808 ms
[shadow] <default>: FPS: 2553 FrameTime: 0.392 ms
[refract] <default>: FPS: 498 FrameTime: 2.008 ms
[conditionals] fragment-steps=0:vertex-steps=0: FPS: 4110 FrameTime: 0.243 ms
[conditionals] fragment-steps=5:vertex-steps=0: FPS: 6251 FrameTime: 0.160 ms
[conditionals] fragment-steps=0:vertex-steps=5: FPS: 6569 FrameTime: 0.152 ms
[function] fragment-complexity=low:fragment-steps=5: FPS: 5374 FrameTime: 0.186 ms
[function] fragment-complexity=medium:fragment-steps=5: FPS: 6257 FrameTime: 0.160 ms
[loop] fragment-loop=false:fragment-steps=5:vertex-steps=5: FPS: 5404 FrameTime: 0.185 ms
[loop] fragment-steps=5:fragment-uniform=false:vertex-steps=5: FPS: 5769 FrameTime: 0.173 ms
[loop] fragment-steps=5:fragment-uniform=true:vertex-steps=5: FPS: 5720 FrameTime: 0.175 ms
=======================================================
                                  glmark2 Score: 3249 
=======================================================

The kicker: in my system-monitoring tool (MATE) I had 1 of 4 cores maxed out at 100%, with thread/core context switches every 30 s or so.

I tweaked it to 250 ms resolution.

My other system-monitoring tool (50 ms, c. 20 Hz) really showed no core maxed out, as you said!

 

Now I tried it while running SC: spikes at 80%, a mean between 20-60% (all 4 cores), and core context switches up a little (from 3 Hz to 4-5 Hz).

It looks really healthy.

 

The problem I have: which CPU for the NVIDIA GTX 1050 Ti I want to buy? Is my i5 @ 3.4 GHz good enough?