[-] AernaLingus@hexbear.net 7 points 5 days ago

I just use a fuck-off massive case

[-] AernaLingus@hexbear.net 7 points 2 weeks ago

Beat me to it.

Israel does not control the US; the US controls Israel.

Something Joe Biden will proudly admit!

[-] AernaLingus@hexbear.net 7 points 2 months ago

David P. Goldman is deputy editor of Asia Times and a fellow of the Claremont Institute’s Center for the American Way of Life.

What a fash-coded name for a think tank. Might as well call it the Center for Securing the Existence of Our People and a Future for White Children.

[-] AernaLingus@hexbear.net 6 points 2 months ago

If you're watching consecutive episodes of a series you can always just download them to your phone before you head to work. Not really viable if you hop around a lot, though.

[-] AernaLingus@hexbear.net 61 points 9 months ago

"Owning a car = freedom"

"You need a big truck/SUV to haul things" (it's just a coincidence that people drove much smaller cars before a multibillion dollar deluge of advertising)

"It's consumers' responsibility to reduce plastic pollution by recycling, and recycling is effective" (whoever came up with this one belongs in the PR scumfuck hall of fame)

[-] AernaLingus@hexbear.net 39 points 9 months ago

Original Phoronix article which has all the individual benchmarks—weird that they didn't link to it

[-] AernaLingus@hexbear.net 13 points 9 months ago

showing a maverick side

Supporting the status quo = maverick

[-] AernaLingus@hexbear.net 18 points 10 months ago

There's a variable that contains the number of cores (called cpus) which is hardcoded to max out at 8, but that doesn't mean cores beyond the 8th aren't utilized--it just means that the scheduling scaling factor stops changing, in both the linear and logarithmic cases, once you go above that number:


/*
 * Increase the granularity value when there are more CPUs,
 * because with more CPUs the 'effective latency' as visible
 * to users decreases. But the relationship is not linear,
 * so pick a second-best guess by going with the log2 of the
 * number of CPUs.
 *
 * This idea comes from the SD scheduler of Con Kolivas:
 */
static unsigned int get_update_sysctl_factor(void)
{
	unsigned int cpus = min_t(unsigned int, num_online_cpus(), 8);
	unsigned int factor;

	switch (sysctl_sched_tunable_scaling) {
	case SCHED_TUNABLESCALING_NONE:
		factor = 1;
		break;
	case SCHED_TUNABLESCALING_LINEAR:
		factor = cpus;
		break;
	case SCHED_TUNABLESCALING_LOG:
	default:
		factor = 1 + ilog2(cpus);
		break;
	}

	return factor;
}

The core claim is this:

It’s problematic that the kernel was hardcoded to a maximum of 8 cores (scaling factor of 4). It can’t be good to reschedule hundreds of tasks every few milliseconds, maybe on a different core, maybe on a different die. It can’t be good for performance and cache locality.

On this point, I have no idea (hope someone more knowledgeable will weigh in). But I'd say the headline is misleading at best.

[-] AernaLingus@hexbear.net 21 points 11 months ago
[-] AernaLingus@hexbear.net 7 points 1 year ago

O Tannentag, o Tannentag...

