Schedulers...

thx-1138
Level 7
Posts: 1845
Joined: Fri Mar 10, 2017 12:15 pm
Location: Athens, Greece

Schedulers...

Post by thx-1138 » Thu Aug 22, 2019 4:06 pm

...interesting changes / stuff happening out there, some of which I also hadn't noticed earlier.
Just read over at Phoronix that the next Fedora will change the default scheduler to BFQ,
and that Chromebooks already did so back on 20 August...
What I hadn't noticed is that from kernel 5.0 onwards, the single-queue schedulers were removed:
that might be of interest to people with SSDs who boot with elevator=noop etc.
Indeed...
$ cat /sys/block/sda/queue/scheduler
[mq-deadline] none
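For anyone poking at their own machine, here's a small sketch (bash assumed; the helper name active_sched is made up) that pulls the active scheduler, i.e. the bracketed entry, out of that sysfs format:

```shell
# Extract the active I/O scheduler (the bracketed name) from the
# sysfs format, e.g. "[mq-deadline] none" -> "mq-deadline".
active_sched() {
    s=${1#*[}                 # drop everything up to and including '['
    printf '%s\n' "${s%%]*}"  # drop everything from ']' onward
}

# On a live system you would feed it the sysfs file:
#   active_sched "$(cat /sys/block/sda/queue/scheduler)"
active_sched "[mq-deadline] none"      # prints: mq-deadline
active_sched "mq-deadline [bfq] none"  # prints: bfq
```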

However, what I found quite interesting are the benchmarks published in Ubuntu's wiki,
which more or less suggest BFQ should be... 'avoided' on SSDs (last edited 2019-08-07):
https://wiki.ubuntu.com/Kernel/Reference/IOSchedulers
I don't really have much of an opinion on this (nor could I, heh...),
just got curious how come different entities in the desktop Linux world
appear to have such different viewpoints (& synthetic? test results) regarding this...
Last edited by thx-1138 on Fri Aug 23, 2019 12:10 am, edited 1 time in total.

catweazel
Level 19
Posts: 9216
Joined: Fri Oct 12, 2012 9:44 pm
Location: Australian Antarctic Territory

Re: Schedulers...

Post by catweazel » Thu Aug 22, 2019 4:49 pm

thx-1138 wrote:
Thu Aug 22, 2019 4:06 pm
However, what I found quite interesting are the benchmarks published in Ubuntu's wiki,
which more or less suggest BFQ should be... 'avoided' on SSDs (last edited 2019-08-07):
https://wiki.ubuntu.com/Kernel/Reference/IOSchedulers
I don't really have much of an opinion on this (nor could I, heh...),
just got curious how come different entities in the desktop Linux world
appear to have such different viewpoints (& synthetic? test results) regarding this...
I've been running BFQ for a few days and I think it's great, but I use it because I have hardware RAID that cops a battering from multiple applications at the same time. I also have a 3.2 GB/s NVMe. The Ubuntu article uses the word "avoid" a bit too loosely. BFQ isn't going to make a difference unless there are at least two applications accessing a block device at the same time. Any decent scheduler will achieve the same throughput in single-use mode because the scheduler just hands the requests to the disk without scheduling them.

The free clue to Ubuntu's loose use of 'avoid' is in the statement directly under where they list it as 'avoid':
It is worth noting that there is little difference in throughput between the cfq/deadline/mq-deadline/kyber I/O schedulers when using fast multi-queue SSD configurations or fast NVME devices. In these cases it may be preferable to use a noop/none I/O scheduler to reduce CPU overhead.
I would avoid BFQ only with a slow CPU. In addition, udev rules can be set up to distinguish between an SSD and a HDD.

GRUB_CMDLINE_LINUX_DEFAULT

Code: Select all

quiet splash processor.max_cstate=1 intel_idle.max_cstate=1 mitigations=off scsi_mod.use_blk_mq=1
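That line lives in GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and takes effect after running sudo update-grub and rebooting. A small sketch (the helper name is made up) to check whether a given parameter actually made it onto the running kernel's command line:

```shell
# Check whether a parameter is present on the kernel command line.
# With one argument it reads /proc/cmdline; a second argument can
# supply a cmdline string directly (handy for testing).
has_cmdline_param() {
    case " ${2:-$(cat /proc/cmdline)} " in
        *" $1 "* | *" $1="*) return 0 ;;
        *) return 1 ;;
    esac
}

# Example against a literal cmdline string:
has_cmdline_param mitigations "quiet splash mitigations=off scsi_mod.use_blk_mq=1" \
    && echo "mitigations is set"
```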
/etc/udev/rules.d/60-scheduler.rules

Code: Select all

# fall back to deadline when the rotational attribute is missing;
# otherwise bfq for non-rotational (SSD/NVMe) devices
ACTION=="add|change", KERNEL=="nvme0n1", TEST!="queue/rotational", ATTR{queue/scheduler}="deadline"
ACTION=="add|change", KERNEL=="nvme0n1", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="bfq"

ACTION=="add|change", KERNEL=="sd[b-z]", TEST!="queue/rotational", ATTR{queue/scheduler}="deadline"
ACTION=="add|change", KERNEL=="sd[b-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="bfq"

# bfq for hardware RAID
ACTION=="add|change", KERNEL=="sda", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"

Code: Select all

sudo cat /sys/block/sda/queue/scheduler
[sudo] password for xxx: 
mq-deadline [bfq] none
I sincerely doubt that the person who wrote the ubuntu wiki article knew the full story.
¡uʍop ǝpısdn sı buıɥʇʎɹǝʌǝ os ɐıןɐɹʇsnɐ ɯoɹɟ ɯ,ı

Portreve
Level 8
Posts: 2007
Joined: Mon Apr 18, 2011 12:03 am
Location: Florida

Re: Schedulers...

Post by Portreve » Thu Aug 22, 2019 6:40 pm

catweazel wrote:
Thu Aug 22, 2019 4:49 pm
.....................
I'm glad you're on our side! :lol:
I'm so down wit' dat', yo, that I'm under the concrete.

Presently rocking LinuxMint 19.2 Cinnamon.

Remember to mark your fixed problem [SOLVED].

All in all, you're just another brick in the wall.

thx-1138
Level 7
Posts: 1845
Joined: Fri Mar 10, 2017 12:15 pm
Location: Athens, Greece

Re: Schedulers...

Post by thx-1138 » Thu Aug 22, 2019 11:59 pm

catweazel wrote:
Thu Aug 22, 2019 4:49 pm
.....................
Thank you for sharing your current experience with BFQ, that's very interesting,
and I'd be pretty interested to hear if you happen to stumble upon any issues / quirks with it.
Not really that much into performance micro-tuning myself, to be honest; mainly curiosity...
(I do however recall a cool tutorial you had posted on compiling the kernel with graysky's -march=native etc. patches) :)

Did some further googling, and funnily enough, once again on Phoronix,
it appears that others had also noticed Ubuntu's wiki entry a couple of months ago
and wondered why it presents such a different viewpoint (first couple of comments)...

The Ubuntu wiki entry itself appears to have been written by a Canonical kernel dev back on 2018-12-11,
with rather minimal edits since then, and says:
"The results are based on running 25 different synthetic I/O patterns generated using fio on ext2,
ext3, ext4, xfs and btrfs with the various I/O schedulers using the 4.19 kernel."
BFQ must have had quite a few weird issues here and there before 5.0,
as it appears that even Kolivas (of the ck patchset) wasn't impressed with it back at that time:
https://ck-hack.blogspot.com/2018/11/li ... 0-for.html
(some rather interesting, albeit dated, comments there as well)...
And if nothing else, there's no question he's certainly into ultra-maximum performance tuning...

So far so good, and the above would seem a pretty logical assumption:
pre-5.0 issues, plus synthetic tests done on earlier kernels.
What partially spoils that assumption, though, is the fact that...
Google enabled BFQ on Chromebooks under... 4.19, hah.
And one thing is for sure: Google has all the spare resources in the world
to run as many tests as they want, compared to Canonical, Fedora, Phoronix, whoever...

Hence, I dunno. Might be Linux... 'politics' as well, I assume. All in all, I guess you're right:
at this point in time, it's probably better to more or less ignore the above
and, if curious / interested, just enable and test it under one's own workload directly...
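For what it's worth, "test it under one's own workload" could look something like the sketch below (device name, scheduler list, and fio parameters are all illustrative; a real run needs root and the fio package, so this defaults to a dry run that only prints what it would do):

```shell
# Run the same fio job under each scheduler in turn.
# DRY_RUN=1 (the default here) only prints what would happen.
DEV=${DEV:-sda}
DRY_RUN=${DRY_RUN:-1}

run_under() {
    # $1 = scheduler name
    if [ "$DRY_RUN" = 1 ]; then
        echo "would test scheduler: $1"
    else
        echo "$1" > "/sys/block/$DEV/queue/scheduler"
        fio --name="bench-$1" --filename=/tmp/fio.test --size=256M \
            --rw=randread --bs=4k --iodepth=32 --runtime=30 --time_based
    fi
}

for sched in mq-deadline bfq none; do
    run_under "$sched"
done
```

Comparing the fio summaries per scheduler (bandwidth and completion latency) is then a far more direct answer than any distro wiki.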

catweazel
Level 19
Posts: 9216
Joined: Fri Oct 12, 2012 9:44 pm
Location: Australian Antarctic Territory

Re: Schedulers...

Post by catweazel » Fri Aug 23, 2019 1:32 am

thx-1138 wrote:
Thu Aug 22, 2019 11:59 pm
catweazel wrote:
Thu Aug 22, 2019 4:49 pm
.....................
Thank you for sharing your current experience with BFQ, that's very interesting,
and I'd be pretty interested to hear if you happen to stumble upon any issues / quirks with it.
It's a genuine pleasure, and I'll post about it if I strike any issues.

Cheers.
¡uʍop ǝpısdn sı buıɥʇʎɹǝʌǝ os ɐıןɐɹʇsnɐ ɯoɹɟ ɯ,ı
