Mercedes unveils 2025 electric G-Class, with 4 motors and tank turns

electrek.co - Comments


Mercedes unveiled its 2025 electric G-Class tonight – which it’s calling the “G580 with EQ technology” – and we’re here in Beverly Hills at the reveal with all the details.

Mercedes first surprised us with its “EQG” concept at IAA in 2021. Now it’s heading to production, but with a somewhat more plain name.

At the time we had almost no details, but now we’re learning all about the upcoming electric off-roader here in the wilds of… Beverly Hills, California (a simultaneous reveal happened in China at the Beijing Auto Show).

So, maybe no heavy off-roading demonstrations are in the cards for today.

But the electric G-Class does have off-roading chops. It comes with 4 independent electric motors putting out a combined 579hp and 879 lb-ft of torque. Each motor has its own 2-speed transmission, giving access to a low gear with 2:1 reduction for off-roading, and the 4 independent motors mean the car can vector torque to whichever wheels need it – even better than a locking differential.

4 wheel motors also mean the G580 will be capable of what Mercedes calls G-Turn, its branding of what we’ve previously seen referred to as a “tank turn” when Rivian was working on it (before Rivian abandoned it and pivoted to “front dig mode” instead). This means the G580 will be able to do 2 full rotations on the spot by spinning the wheels on the left and right sides of the car in opposite directions at once.

However, this feature is more of a toy, just for fun. Mercedes also has a G-steering feature, a sort of mini version of the G-Turn that helps you make extremely tight turns by activating torque vectoring and adding just a little bit of spin (though unlike the EQS, the G580 doesn’t have rear-wheel steering). Neither of these features will be great for tire wear.

The G580 can climb up to a 100% (45º) grade, hold stable on lateral slopes of up to 35º, and ford 33.5 inches of water – 6 inches deeper than the gas version. It offers 9.8 inches of ground clearance, a 32º approach angle, a 30.7º departure angle and a 20.3º breakover angle, with independent double-wishbone suspension in the front and a solid de Dion axle in the rear.
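For reference, a percent grade is rise over run, so the 100% and 45º figures are the same slope expressed two ways. A quick sanity check of that conversion (plain geometry, not a Mercedes figure):

```python
import math

def grade_percent_to_degrees(grade_pct: float) -> float:
    # Percent grade is rise/run * 100, so the angle is atan(grade/100).
    return math.degrees(math.atan(grade_pct / 100.0))

print(grade_percent_to_degrees(100.0))  # 45.0 degrees, matching the spec sheet
```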

To help you see where you’re going, the G580 has a “transparent hood” feature, which uses a camera to show what’s in front of and under the car on the internal display. This is important for off-roading: if you’re cresting a ridge and can’t see past the hood, the camera view shows you what’s underneath.

But it’s also a Mercedes, which means it’s fancy inside. And the 2025 model will be particularly fancy, as it’s only available in EDITION ONE trim with lots of exclusive interior and exterior touches. EDITION ONE will have a limited number of color options, but you can customize later editions of the car basically any way you want through Mercedes’ MANUFAKTUR car customization process.

So whether you’re conquering a real jungle or just the concrete jungle of… Rodeo Drive, or Las Vegas for the latest cryptocurrency convention, you’ll feel right at home in the Mercedes G-Class.

That fanciness is certainly needed to justify its price, which Mercedes hasn’t yet released, but said that it will be “in the ballpark” of the G63 (which starts at around $180,000).

In terms of exterior design, the G580 is basically the same as the gas G-Class, with the same boxy design. Unlike many EVs, it doesn’t adopt a particularly curvy exterior, and still has a textured grille area.

The decision to stick with a traditional-looking grille goes hand in hand with Mercedes’ recent decision to add a “more classic grille option” to its EQS. And if you prefer the traditional G-Class grille, you can simply order the standard one straight from the gas version (but then you don’t get those cool lights).

And overall, Mercedes said it was very important to maintain the overall design of the G-Class. So it hasn’t tweaked it to make it look electric, other than some grille modifications and a couple aero bits.

Mercedes says the vehicle has “optimized aerodynamics,” which was surely a primary design intent of this vehicle that consists solely of straight lines. But actually, there have been a couple small changes, like a slightly modified A-pillar and a strip above the windshield to smooth out the front edge of the roof. This balances out to a drag coefficient of… 0.44 (better than other G-Classes, but worse than, say, a Tesla Semi).

As for details on its electric drive capabilities, the aforementioned 4 motors can push the G580 to 60mph in an estimated 4.6 seconds, on the way to a top speed of 112mph (180km/h). These aren’t the fastest numbers out there, but the car isn’t meant to be a racecar – Mercedes could have gone with a bigger battery, or more power, but that would have meant compromises elsewhere, and Mercedes said it was far more important to focus on the total package.

Mercedes hasn’t given us an EPA range number yet, but with a 116kWh battery and a face that’s even flatter than its electric-triangle-on-wheels competition, we can imagine it’s somewhere in the mid-200s. The WLTP rating is 473km (293mi), but WLTP is a little more lenient than EPA numbers.

More important than overall range, Mercedes says the G-Class will DC charge from 10-80% in 32 minutes, with a 200kW peak charging rate (and an 11kW AC charge rate). That maths out to an average charge rate of approximately 150kW on DC over the full session, which is reasonable, but not great. Also: the models we saw had CCS, not NACS.
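A quick back-of-the-envelope check of that average rate, assuming the quoted 116kWh usable capacity and the 10-80% window (Mercedes hasn’t published a full charge curve):

```python
usable_kwh = 116.0
energy_added_kwh = usable_kwh * (0.80 - 0.10)  # ~81.2 kWh added between 10% and 80%
session_hours = 32 / 60                        # the quoted 32-minute session
avg_rate_kw = energy_added_kwh / session_hours
print(round(avg_rate_kw))                      # ~152 kW average, versus the 200 kW peak
```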

Given the car’s big 116kWh (usable) battery, it doesn’t charge nearly as fast as a Hyundai/Kia E-GMP car, but it’s still reasonably good compared to other chunky EVs. The G580 weighs ~6,800lbs/3,085kg, with a GVWR of exactly 3,500kg – the maximum allowed by German law. Part of that weight is a 127lb, inch-thick carbon skidplate under the car to protect the battery during extreme off-roading (a steel plate would weigh ~3x as much).

The G580 comes with 5 regenerative braking settings, including Mercedes’ “D-auto” setting, in which the car intelligently decides when to apply regenerative braking based on traffic conditions (we recently tried this setting on the eSprinter, but struggled to find a situation where it would be useful). Regen activates off-throttle, suggesting the possibility of one-pedal driving, but we haven’t had a chance to try it out and see whether its max 217kW of regen braking is really strong enough to avoid most brake pedal usage.

It also has a feature Mercedes calls “G-Roar,” a noise generator that allows you to simulate various electric drivetrain noises. Thankfully, this can be turned off.

For a final cool electric touch, the car has done something new with its iconic rear end. In place of the spare tire carrier that typically adorns the backside of the G-Class, there’s an optional compartment which can be used to store charging cables or the like. You can still opt for the spare tire, too, but I really like the charging box.

Mercedes was mum about what sort of electric/gas sales mix it expects out of the next generation, but it says that it has retained significant flexibility in its production plans so that it can adjust based on what customers and dealers ask for. It doesn’t plan to tell its dealers to push one or the other (which probably means the dealers will push the gas model, sigh), and its manufacturing partner (Magna Steyr) is ready to adapt to wherever the market goes.

Electrek’s Take

Look, this is a G-Class. It’s a statement car, it’s an image car. If you like it, you know that you like it (personally, I don’t, but I also don’t post on Instagram, so I guess I’m not the target audience). For the majority of drivers, its off-road capabilities really won’t matter all that much – but it’s still cool that it has them.

What matters here is whether it stays true to the G-Class, and as far as we can tell, it does. It looks like a G-Class and it feels like a G-Class. The doors thunk closed like a G-Class.

And an important note – Mercedes said, “if the G can go electric, any car can go electric.” We, of course, agree. This is a car that has been defined in many ways by excess, with the gas version getting just 14 miles per gallon. And yet here it is, in electric trim, with a huge battery (but not out of line with other huge EVs), beating the gas version’s performance both on- and off-road.

As for the name – while “G580 with EQ technology” is a bit of a mouthful, I actually like the simple designation “G580.” Surely people will refer to it as “the electric G-Class” or the like, but by giving the car a regular model name, Mercedes is saying that it’s treating the car like a regular car.

Instead of siloing EVs into their own sub-brand, Mercedes is saying that this is a G-Class, and if you want a G-Class, this is that. Mercedes was clear that this is not a rational vehicle, that its customers don’t need a G-Class, they want a G-Class.

So there you go. If you want a G-Class, this is a G-Class.


Windows 11 now comes with its own adware

www.engadget.com - Comments

It used to be that you could pay for a retail version of Windows 11 and expect it to be ad-free, but those days are apparently finito. The latest update to Windows 11 (KB5036980) comes out this week and includes ads for apps in the "recommended" section of the Start Menu, one of the most oft-used parts of the OS.

"The Recommended section of the Start menu will show some Microsoft Store apps," according to the release notes. "These apps come from a small set of curated developers."

The app suggestions are enabled by default, but fortunately you can restore your previously pristine Windows experience if you've installed the update. To do so, go into Settings and select Personalization > Start, then switch the "Show recommendations for tips, app promotions and more" toggle to "off."

The new "feature" arrives just weeks after it appeared as an Insider beta, showing how quickly Microsoft can implement things when it wants to. It certainly wasn't enough time to receive the kind of user feedback the Insider program is designed for.

The update is bound to rub customers the wrong way, considering that Windows 11 starts at $139 for the Home version. While removing it isn't a huge deal, it may also remind folks of the needless time they spent stripping bloatware from OEM Windows installations. Microsoft previously tested ads in the Windows 11 File Explorer, but ended the experiment shortly afterward.


Argentina celebrates first quarterly budget surplus in 16 years

www.wionews.com - Comments

Argentina’s President Javier Milei, in a televised address from the presidential palace in Buenos Aires, announced the country’s first quarterly fiscal surplus since 2008, Bloomberg News reported.

While economic challenges continue, Milei said he would maintain fiscal discipline, calling the surplus a crucial moment in Argentina's pursuit of prosperity.

"The fiscal surplus is the cornerstone from which we are building a new era of prosperity in Argentina," Bloomberg quoted Milei as saying.

According to him, Argentina recorded a quarterly fiscal surplus equivalent to 0.2 per cent of gross domestic product (GDP) at the outset of the year, accompanied by a third consecutive monthly surplus in March.

The announcement sparked optimism among investors, with Argentina bonds seeing a surge in value, driving gains across emerging markets.

According to Bloomberg, Diego Ferro, the founder of M2M Capital in New York, lauded Argentina as a favourable investment opportunity following Milei's address, attributing the positive market response to the country's fiscal achievements.

However, Ferro cautioned that continued structural reforms are essential to ensure the permanence of Argentina's fiscal stability, warning against reliance on short-term measures.

Milei attributed Argentina's rare fiscal surplus to strict measures implemented by his administration, including heavy cuts to transfers to provincial governments and a major reduction in public works expenditure.

The government also adopted cost-cutting measures while inflation rates increased, allowing nearly 300 per cent annual inflation to erode real public spending on wages and pensions.

While Milei's approach has yielded positive fiscal outcomes, Adriana Dupita, deputy chief emerging markets economist with Bloomberg Economics, raised concerns about the sustainability of the strategy.

Dupita highlighted the adverse impact of inflation on public sector salaries and pensions, cautioning against the prolonged erosion of purchasing power.

Since assuming office, Milei has embarked on a series of bold economic reforms to revive Argentina's economy.

These reforms include currency devaluation, restructuring of government ministries, deregulation of prices, and gradual reduction of energy and transport subsidies.

These measures have contributed to a gradual slowdown in monthly inflation rates, marking initial progress towards stabilising the economy.

In his televised address, Milei assured Argentines that their sacrifices would yield tangible benefits, promising a future characterised by reduced tax burdens and enhanced economic prosperity.

As Argentina celebrates its first quarterly fiscal surplus in over a decade, Milei's administration faces mounting pressure to sustain momentum and implement long-term structural reforms.

(With inputs from Bloomberg)


Apple Releases Open Source AI Models That Run On-Device

www.macrumors.com - Comments

Apple today released several open source large language models (LLMs) that are designed to run on-device rather than through cloud servers. Called OpenELM (Open-source Efficient Language Models), the LLMs are available on the Hugging Face Hub, a community for sharing AI code.

As outlined in a white paper [PDF], there are eight OpenELM models in total: four pre-trained using the CoreNet library, and four instruction-tuned variants. Apple uses a layer-wise scaling strategy that is aimed at improving accuracy and efficiency.

Apple provided code, training logs, and multiple versions rather than just the final trained model, and the researchers behind the project hope that it will lead to faster progress and "more trustworthy results" in the natural language AI field.

OpenELM, a state-of-the-art open language model. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. For example, with a parameter budget of approximately one billion parameters, OpenELM exhibits a 2.36% improvement in accuracy compared to OLMo while requiring 2x fewer pre-training tokens.

Diverging from prior practices that only provide model weights and inference code, and pre-train on private datasets, our release includes the complete framework for training and evaluation of the language model on publicly available datasets, including training logs, multiple checkpoints, and pre-training configurations.

Apple says that it is releasing the OpenELM models to "empower and enrich the open research community" with state-of-the-art language models. Sharing open source models gives researchers a way to investigate risks and data and model biases. Developers and companies are able to use the models as-is or make modifications.
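For the curious, the models can in principle be pulled down like any other Hugging Face checkpoint. The snippet below is only a sketch: the repo ids are illustrative assumptions (check the Hub pages for the exact model names, the recommended tokenizer and the license terms), and OpenELM's custom model code likely means trust_remote_code is required.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-270M-Instruct"   # assumed name of the smallest instruction-tuned variant
tokenizer_id = "meta-llama/Llama-2-7b-hf"  # OpenELM reportedly reuses a Llama tokenizer (gated repo)

model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)

inputs = tokenizer("Once upon a time,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```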

The open sharing of information has become an important tool for Apple to recruit top engineers, scientists, and experts because it provides opportunities for research papers that would not normally have been able to be published under Apple's secretive policies.

Apple has not yet brought these kinds of AI capabilities to its devices, but iOS 18 is expected to include a number of new AI features, and rumors suggest that Apple is planning to run its large language models on-device for privacy purposes.

Airlines required to refund passengers for canceled, delayed flights

abcnews.go.com - Comments

Good news for airline travelers: the Department of Transportation on Wednesday announced it is rolling out new rules that will require airlines to automatically give cash refunds to passengers for canceled and significantly delayed flights.

"This is a big day for America's flying public," said Transportation Secretary Pete Buttigieg at a Wednesday morning news conference. Buttigieg said the new rules -- which require prompt refunds -- are the biggest expansion of passenger rights in the department's history.

Airlines can no longer decide how long a delay must be before a refund is issued. Under the new DOT rules, the delays covered would be more than three hours for domestic flights and more than six hours for international flights, the agency said.

This includes tickets purchased directly from airlines, travel agents and third-party sites such as Expedia and Travelocity.

The DOT rules lay out that passengers will be "entitled to a refund if their flight is canceled or significantly changed, and they do not accept alternative transportation or travel credits offered."

PHOTO: A person walks through the terminal as planes remain at gates at Ronald Reagan Washington National Airport in Arlington, Va., Jan. 11, 2023. (Patrick Semansky/AP, FILE)

DOT will also require airlines to give cash refunds if your bags are lost and not delivered within 12 hours.

The refunds must be issued within seven days, according to the new DOT rules, and must be in cash unless the passenger chooses another form of compensation. Airlines can no longer issue refunds in forms of vouchers or credits when consumers are entitled to receive cash.

Airlines will have six months to comply with the new rules.

PHOTO: U.S. Secretary of Transportation Pete Buttigieg speaks at a press conference at Reagan National Airport on April 24, 2024. (ABC News, POOL)

"Passengers deserve to get their money back when an airline owes them -- without headaches or haggling," Buttigieg said in a statement.

The DOT said it is also working on rules related to family seating fees, enhancing rights for wheelchair-traveling passengers for safe and dignified travel and mandating compensation and amenities if flights are delayed or canceled by airlines.

Buttigieg said the DOT is also protecting airline passengers from being surprised by hidden fees -- a move he estimates will save Americans billions of dollars every year.

The DOT rules include that passengers will receive refunds for extra services paid for and not provided, such as Wi-Fi, seat selection or inflight entertainment.

The rules come after the agency handed Southwest Airlines a record $140 million fine for its operational meltdown during the 2022 holiday travel season.

Buttigieg said Southwest's fine sets a "new standard" for airlines and passenger rights.

"To be clear, we want the airline sector to thrive. It is why we put so much into helping them survive the pandemic and honestly it's why we're being so rigorous on passenger protection," he said.

Buttigieg reiterated that refund requirements are already the standard for airlines, but the new DOT rules hold the airlines to account and make sure passengers get the "refunds that are owed to them."

"Airlines are not enthusiastic about us holding them to a higher standard," Buttigieg said, adding that he "knows they will be able to adapt to this."

Airlines for America, the trade association for the country's leading passenger and cargo airlines, told ABC News in a statement that its members "offer a range of options -- including fully refundable fares." It said consumers are "given the choice of refundable ticket options with terms and conditions that best fit their needs at first search results."

The group said the 11 largest U.S. airlines issued $43 billion in customer refunds from 2020 through 2023, nearly $11 billion of that just last year.

Bicycle use now exceeds car use in Paris

english.elpais.com - Comments

It’s rush hour on Rue de Rivoli, one of the main arteries of the French capital. The bicycles pass one after another in quick succession, ringing their bell when a pedestrian crosses without looking. Five years ago, it was cars that monopolized this three-kilometer axis that runs in front of Paris City Hall and the Louvre Museum. Not anymore. Two-wheel transportation has prevailed, favored by a paradigm shift in urban mobility. The cycling revolution, promoted by local authorities, is beginning to bear fruit: according to a recent study by the Paris Région Institute, a public agency, bicycles already surpass cars as a means of transportation in the interior of Paris, accounting for 11.2% of trips compared to 4.3%. A similar trend is seen in trips between the suburbs and the city center: 14% are made by bicycle and 11.8% by car.

Rue de Rivoli, with its two-way cycle lanes and its dedicated lane for buses and taxis, is perhaps one of the most emblematic examples of the change that the city has experienced in recent years. But it’s not the only one. The perpendicular Boulevard de Sébastopol has become the route most frequented by cyclists, with figures that usually exceed 10,000 daily trips, according to the count kept by the association Paris en Selle.

When it is sunny, the density can be so high that traffic jams sometimes occur, and the narrowness of the lane causes friction between bikes, creating moments of tension. City officials led by Mayor Anne Hidalgo, a Socialist, have tried to remedy this situation by building other bike lanes on parallel streets.

From north to south and east to west, the map of the capital has been filled with infrastructure that gives the bicycle a privileged place. Paris has more than 1,000 kilometers (621 miles) of facilities adapted for cyclists, including more than 300 km (186 miles) of bike lanes and 52 km (32 miles) of provisional lanes, according to the latest available municipal data, from 2021. The rest are lanes shared with cars or lanes only marked with paint on the ground.

By 2026, local officials want the entire city to be suitable for two-wheel transportation. To this end, the city has set aside $250 million, $100 million more than in Hidalgo’s first term. This summer’s Olympic Games will serve as an accelerator of this new “bike plan,” with routes that will allow access to the Olympic venues.

But there is still some way to go. The Paris en Selle association warns that only 27% of the “bike plan” has been carried out despite the fact that 62% of Hidalgo’s second term in office has already elapsed. The Deputy Mayor of Paris for Transportation, David Belliard, acknowledges that there are delays, but does not lose hope. Progress is noticeable.

In some thoroughfares, the number of bikes already surpasses vehicles. Between 2022 and 2023, the use of bike lanes doubled at peak times, according to data collected by the capital’s 128 counters. The goal is to create a network of cycling paths that run along the busiest metro lines, to unclog public transit and offer an equally fast and safe alternative for commuters.

The number of people who travel by bicycle has increased exponentially. Vélib, the municipal urban bicycle rental service, has increased its fleet with 3,000 new bikes since March. Edmée Doroszlai, a 62-year-old Parisian, still remembers the first time she started riding on two wheels in the early 1980s. “It was monstrous, almost impossible and very dangerous,” she says from the center of Paris, with her bike at her side.

“There is also a big change in how men behave when they see a woman on a bike,” she adds, alluding to the normalization of its use. The presence of adapted infrastructure, she confirms, has encouraged her to use it more, as have many families who travel on cycle paths with small children.

“We still have to go further,” Belliard insisted in an interview with BFMTV earlier this month. The councilor was reacting to the study by the Paris Région Institute, the regional urban planning and environment agency, which indicated that 11.2% of trips in Paris were made by bike between 2022 and 2023, compared to 4.3% by car. The change in trend is clear. In 2021, two wheels still represented 5.6% of trips, while cars were 9%, according to Belliard.

Photo: Bike path on the banks of the Seine near Notre Dame, in Paris. Cyclists and picnickers on the south bank of the Seine, Paris. (Jon Hicks)

In addition to surpassing the car as a means of travel within Paris, the research indicates that residents of the nearest suburbs also prefer to use the bike, with 14% of trips compared to 11.8% for cars. The figures are even better during rush hour, when 18.9% of trips are made by bike and only 6.6% by car. Travel on foot, however, continues to lead mobility within the municipality with 53%, followed by those made on public transit, with 30%. The study was carried out with 3,337 residents of the capital region who agreed to be fitted with a GPS tracker.

The bike gradually gained popularity during the public transit strike that paralyzed the capital in December 2019, in protest of President Emmanuel Macron’s pension reform. But it was also prominent after the Covid confinement in 2020, when the city tested the so-called “coronapistes,” temporary cycling lanes that progressively became permanent. Like Rivoli’s.

Better connections with neighborhoods

“The network is very good,” says Arnaud Faure, 31, co-owner of the Bivouac Cycles bicycle repair shop in Saint Ouen, a banlieue (suburb) in the north of the city. He has been in the French capital for two years, and every day he travels 13 km (8 miles) to get to work and the same distance back home. He says that almost all of his journey is along bike paths. But he cites two drawbacks. On one hand, the lack of safe parking, a determining factor for bicycle use. On the other hand, the fact that “just like in big cities, traffic is dense and can sometimes be dangerous.”

In 12 years, car traffic has decreased by 40% in Paris, according to City Hall. “But these rapid changes in habits have been accompanied by tension” in the streets, the mayor has admitted. “It takes time for everyone to find their place and feel safe,” she added, following road regulations that seek to raise awareness about the shared use of public space. Last summer, posters appeared throughout the city reminding everyone that pedestrians have the priority and that the speed limit for cars is 30 km/h (18 mph).

The city’s plan includes increasing the number of parking spaces for bicycles. The goal is to build more than 130,000 new spots. “Parking at train stations must be developed on a massive scale,” stresses Aymeric Cotard, 29, a member of the association Mieux se déplacer à bicyclette [Better to get around by bike]. One of the large projects that should be completed this year, with 1,200 spaces, is located just behind the Gare du Nord, one of the busiest stations in France. For Cotard, however, it will be insufficient. In the Dutch city of Utrecht, the station has 12,500 spaces for bikes.

The idea is that people who live in the suburbs and take the train daily to work will also use the bicycle once they arrive in Paris. It is one of the main challenges of the coming years, along with facilitating continuous journeys between the capital and its suburbs. “This requires the banlieue cities to do their job and the city of Paris to also improve its entrances, which are inhospitable and unpleasant by bike,” warns Cotard. In addition, it is necessary to provide infrastructure for a flow of cyclists that will be even greater in the future.

The process takes time and has encountered some opposition. But the morphology of the city is changing, adapting to the bike. And, with it, its resilience to the effects of climate change.


Amsterdam roofs that not only grow plants but also capture water for residents

www.wired.com - Comments

Glass Health (YC W23) is hiring founding, senior and lead full-stack engineers

jobs.lever.co - Comments

Magic Numbers

exple.tive.org - Comments

April 24, 2024

The Maximum Transmission Unit – MTU – of an Ethernet frame is 1500 bytes.

1500 bytes is a bit out there as numbers go, or at least it seems that way if you touch computers for a living. It’s not a power of two or anywhere close, it’s suspiciously base-ten-round, and computers don’t care all that much about base ten, so how did we get here?

Well, today I learned that the size of an Ethernet header – 36 bytes – comes from the fact that MTU plus Ethernet header is 1536 bytes, which is 12288 bits, which takes 2^12 microseconds to transmit at 3Mb/second, because the Xerox Alto computer for which Ethernet was invented had an internal data path that ran at 3MHz, so the interface could just write the bits into the Alto’s memory at the precise speed at which they arrived, saving the very-expensive-then cost of extra silicon for an interface or any buffering hardware.
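The arithmetic in that sentence checks out; here’s the quick verification:

```python
frame_bytes = 1500 + 36        # MTU plus the 36-byte header described above
frame_bits = frame_bytes * 8   # 12288 bits
bits_per_microsecond = 3       # 3 Mb/s is 3 bits per microsecond
transmit_us = frame_bits / bits_per_microsecond
print(frame_bits, transmit_us, 2**12)  # 12288 4096.0 4096
```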

Now, “we need to pick just the right magic number here so we can take data straight off the wire and blow it directly into the memory of this specific machine over there” is to any modern sensibilities insane. It’s obviously, dangerously insane. But back when the idea of network security didn’t exist because computers barely existed and networks mostly didn’t exist and unvetted and unsanctioned access to those networks definitely didn’t exist, I bet it seemed like a very reasonable tradeoff.

It really is amazing how many of the things we sort of ambiently accept as standards today, if we even realize we’re making that decision at all, are what they are only because some now-esoteric property of the now-esoteric hardware on which the tech was first invented let the inventors save a few bucks.


IBM to buy HashiCorp in $6.4B deal

www.reuters.com - Comments

Pixiv Blocks Adult Content, but Only for US and UK Users

www.404media.co - Comments

A Japan-based online art platform is banning kink content for users based in the US and UK, as laws in these countries continue to tighten around sites that allow erotic content. 

Pixiv is an image gallery site where artists primarily share illustrations, manga, and novels. The site announced on April 22 that starting April 25, users whose account region is set to the US or UK will be subject to Pixiv’s new terms of use, “Restrictions for Healthy Expression in Specific Countries and Regions.”

The restrictions include several kinds of content that are illegal in the US, including sexualized depictions of minors and bestiality, as well as non-consensual depictions and deepfakes. But it also includes “content that appeals to the prurient interest, is patently offensive in light of community standards where you are located or where such content may be accessed or distributed, lacks serious literary, artistic, political, or scientific value, or otherwise violates any applicable obscenity laws, rules or regulations.” This is an invocation of the Miller test, which determines non-constitutionally protected obscenity.

In the US, legislation is spreading rapidly that would levy heavy fines against sites that contain more than one third (and in some states, one quarter) sexual content if they don’t verify all users’ ages.

Niche Gamer, a gaming industry blog, first reported the Pixiv change. In 2022, it reported that Pixiv changed its terms for its Pixiv Fanbox platform—which allowed users to sell their content—forbidding users from selling content that contains bestiality, sexual exploitation of a minor, incest, rape, and non-consensual mutilation. 

Artist platforms with rules against specific kinds of illegal acts often expand moderation of those rules beyond those kinds of content, and end up catching legal kink content creators in their nets. An artist who brought the latest terms of use change to my attention who goes by kradeelav told me that she’s watched artist peers get kicked off Gumroad, Twitter, Deviantart, Tumblr, Discord, and dozens of other sites in the last five years. This seems like a “strong bellwether for how jittery international companies feel about legal visual art in the US,” she told me.

“I'd never say this a few years ago, but it's my personal fear that the next step is most major internet hosting services implementing these policies on an infrastructure level,” kradeelav said. “My colleagues are certainly planning for it by specifically looking for kink-friendly hosts, to actually making homebrew servers themselves in worst-case scenarios.”

In March, online marketplace Gumroad announced that it would no longer allow most types of adult content on its platform, and selling NSFW works, like “fetish-driven content,” “breast expansion” avatar mods, and adult comics are now among the banned material. Gumroad blamed banking partners as its reason for cracking down on erotic content, and the threat of payment processors leaving platforms has chilled sexual speech across the internet—most notably, when Mastercard and Visa pulled services from Pornhub in 2020.

Eric Schmidt-backed Augment, a GitHub Copilot rival, launches out of stealth

techcrunch.com - Comments

AI is supercharging coding — and developers are embracing it.

In a recent StackOverflow poll, 44% of software engineers said that they use AI tools as part of their development processes now and 26% plan to soon. Gartner estimates that over half of organizations are currently piloting or have already deployed AI-driven coding assistants, and that 75% of developers will use coding assistants in some form by 2028.

Ex-Microsoft software developer Igor Ostrovsky believes that soon, there won’t be a developer who doesn’t use AI in their workflows. “Software engineering remains a difficult and all-too-often tedious and frustrating job, particularly at scale,” he told TechCrunch. “AI can improve software quality, team productivity and help restore the joy of programming.”

So Ostrovsky decided to build the AI-powered coding platform that he himself would want to use.

That platform is Augment, and on Wednesday it emerged from stealth with $252 million in funding at a near-unicorn ($977 million) post-money valuation. With investments from former Google CEO Eric Schmidt and VCs including Index Ventures, Sutter Hill Ventures, Lightspeed Venture Partners, Innovation Endeavors and Meritech Capital, Augment aims to shake up the still-nascent market for generative AI coding technologies.

“Most companies are dissatisfied with the programs they produce and consume; software is too often fragile, complex and expensive to maintain with development teams bogged down with long backlogs for feature requests, bug fixes, security patches, integration requests, migrations and upgrades,” Ostrovsky said. “Augment has both the best team and recipe for empowering programmers and their organizations to deliver high-quality software quicker.”

Ostrovsky spent nearly seven years at Microsoft before joining Pure Storage, a startup developing flash data storage hardware and software products, as a founding engineer. While at Microsoft, Ostrovsky worked on components of Midori, a next-generation operating system the company never released but whose concepts have made their way into other Microsoft projects over the last decade.

In 2022, Ostrovsky and Guy Gur-Ari, previously an AI research scientist at Google, teamed up to create Augment’s MVP. To fill out the startup’s executive ranks, Ostrovsky and Gur-Ari brought on Scott Dietzen, ex-CEO of Pure Storage, and Dion Almaer, formerly a Google engineering director and a VP of engineering at Shopify.

Augment remains a strangely hush-hush operation.

In our conversation, Ostrovsky wasn’t willing to say much about the user experience or even the generative AI models driving Augment’s features (whatever they may be) — save that Augment is using fine-tuned “industry-leading” open models of some sort.

He did say how Augment plans to make money: standard software-as-a-service subscriptions. Pricing and other details will be revealed later this year, Ostrovsky added, closer to Augment’s planned GA release.

“Our funding provides many years of runway to continue to build what we believe to be the best team in enterprise AI,” he said. “We’re accelerating product development and building out Augment’s product, engineering and go-to-market functions as the company gears up for rapid growth.”

Rapid growth is perhaps the best shot Augment has at making waves in an increasingly cutthroat industry.

Practically every tech giant offers its own version of an AI coding assistant. Microsoft has GitHub Copilot, which is by far the most firmly entrenched with over 1.3 million paying individual and 50,000 enterprise customers as of February. Amazon has AWS’ CodeWhisperer. And Google has Gemini Code Assist, recently rebranded from Duet AI for Developers.

Elsewhere, there’s a torrent of coding assistant startups: Magic, Tabnine, Codegen, Refact, TabbyML, Sweep, Laredo and Cognition (which reportedly just raised $175 million), to name a few. Harness and JetBrains, which developed the Kotlin programming language, recently released their own. So did Sentry (albeit with more of a cybersecurity bent).

Can they all — plus Augment now — do business harmoniously together? It seems unlikely. Eye-watering compute costs alone make the AI coding assistant business a challenging one to maintain. Overruns related to training and serving models forced generative AI coding startup Kite to shut down in December 2022. Even Copilot loses money, to the tune of around $20 to $80 a month per user, according to The Wall Street Journal.

Ostrovsky implies that there’s momentum behind Augment already; he claims that “hundreds” of software developers across “dozens” of companies including payment startup Keeta (which is also Eric Schmidt-backed) are using Augment in early access. But will the uptake sustain? That’s the million-dollar question, indeed.

I also wonder if Augment has made any steps toward solving the technical setbacks plaguing code-generating AI, particularly around vulnerabilities.

An analysis by GitClear, the developer of the code analytics tool of the same name, found that coding assistants are resulting in more mistaken code being pushed to codebases, creating headaches for software maintainers. Security researchers have warned that generative coding tools can amplify existing bugs and exploits in projects. And Stanford researchers have found that developers who accept code recommendations from AI assistants tend to produce less secure code.

Then there’s copyright to worry about.

Augment’s models were undoubtedly trained on publicly available data, like all generative AI models — some of which may’ve been copyrighted or under a restrictive license. Some vendors have argued that fair use doctrine shields them from copyright claims while at the same time rolling out tools to mitigate potential infringement. But that hasn’t stopped coders from filing class action lawsuits over what they allege are open licensing and IP violations.

To all this, Ostrovsky says: “Current AI coding assistants don’t adequately understand the programmer’s intent, improve software quality nor facilitate team productivity, and they don’t properly protect intellectual property. Augment’s engineering team boasts deep AI and systems expertise. We’re poised to bring AI coding assistance innovations to developers and software teams.”

Augment, which is based in Palo Alto, has around 50 employees; Ostrovsky expects that number to double by the end of the year.

The Rise and Fall of the LAN Party

aftermath.site - Comments

Today it is trivially easy to play games on a computer with one’s friends over the internet. I can log into a game like Fortnite, party up with a squad and chat in either the game’s built-in voice protocol or use another service like Discord, and be in a game within minutes. I can do this from my computer, a game console, or even my phone. But before the wide availability of high-speed internet, things were more complicated.


The following is excerpted from the book LAN Party, by Merritt K. The book is available for purchase now.


In the 1990s and early 2000s, three-dimensional graphics in videogames were becoming more and more complex. Titles like 1998’s Half-Life pushed games in more cinematic directions, with lighting and textures that went beyond anything released even a few years earlier. Other first-person shooters (FPS) like Counter-Strike (itself originally a mod for Half-Life) and Unreal Tournament built on the work of earlier titles like DOOM, Wolfenstein 3D, and Duke Nukem 3D. Many of these titles were designed for multiplayer action. However, the typically low network speeds of the period meant that these games, unlike slower-paced and less graphically intensive strategy games, were nearly unplayable over an internet connection. In this moment, in which communications technology was being outpaced by graphical power, the LAN (local area network) party was born.

The term itself conjures up strong sensory memories for those who were there—sweaty bodies packed into a basement or convention hall, a dozen CPUs noticeably warming the space, the heft of a CRT monitor being maneuvered into position. For those on the outside, these were scenes of incomprehension or ridicule. But for those who were there, the LAN party was a singular event, a defining social occasion of the early 21st century. It represented the last gasps of the isolated gamer stereotype, ushering in an age in which gaming was not only mainstream, but a social, networked activity.

Of course, people had been bringing together computers for some time prior to the Y2K era. (The demoparty, in which participants cracked code to evade copyright protection and share artistic creations, was an important antecedent to the LAN party.) But it was in this particular period—in the United States, at least—that the social and technological configuration of the LAN party became a true phenomenon. Participants hauled their monitors, towers, and peripherals to a central location, where they would set up their machines and connect them through a network switch. This local connection enabled speeds far beyond those available to the average internet user, allowing lag-free gameplay, not to mention high-speed file sharing at a time when downloading or transporting large files could be an extremely onerous task.

LAN parties ranged from small, private gatherings to massive, multi-day events with thousands of participants, such as QuakeCon, DreamHack, The Gathering, and Euskal Encounter. Both types are represented in this book, though the focus is more on the former. As accessible digital photography was emerging around the same time as LAN parties—and perhaps because computer enthusiasts were more likely than the general population to own gadgets like digital cameras—these events are extraordinarily well documented.

Gaming at the Turn of the Millennium

What do these photos show? Young people—primarily young men—goofing off and playing games, of course. But it’s more than that. Technological and cultural artifacts of the era are strewn throughout, illustrating trends, obsessions, and now-forgotten relics. One of my favorite photos in the book depicts, among other things: a Windows XP error dialogue box; a beige Microsoft keyboard; a disposable film camera; a pair of wraparound headphones that I and nearly everyone else I knew owned in the early 2000s; and a pile of burned CD-Rs, one of which has “StarCraft” written on it in permanent marker. Junk foods and caffeinated beverages appear frequently in the collection, with the energy drink Bawls Guarana in particular popping up again and again. While Mountain Dew has since acquired a reputation as the gamer beverage of choice, Bawls was certainly the unofficial sponsoring drink of the LAN party.

Some games feature prominently in the mythos of the LAN party and in the photos collected in this book. The aforementioned Counter-Strike and Unreal Tournament are two of them, being primarily team-based first-person shooters that laid the groundwork for the ongoing popularity of the genre. These games are best played with minimum latency; they each support large numbers of players and feature quick rounds, which made them big hits at LAN parties. Certain maps in these games have become iconic, celebrated and recreated in other titles—for Counter-Strike, Dust II (de_dust2) is probably the best-known, while for Unreal Tournament, it’s Facing Worlds (CTF-Face). Both of these maps are so significant, so well-remembered and influential that they have their own Wikipedia pages.

Other first-person shooters popular at turn-of-the-century LAN parties include Starsiege: Tribes, Tom Clancy’s Rainbow Six: Rogue Spear, and id’s Quake series. Quake III Arena was released in 1999 and eschewed a single-player narrative component, instead focusing on high-speed multiplayer battles. The engine developed for the game was later used for a number of other successful games, including the awkwardly titled Star Wars Jedi Knight II: Jedi Outcast, which contained a robust multiplayer mode that players built on by creating elaborate rituals around lightsaber duels.

Of course, not all of the games played at LAN parties were first-person shooters. Real-time strategy (RTS) games were also quite popular in the early 2000s, with Blizzard’s StarCraft (1998) and Warcraft III: Reign of Chaos (2002) celebrated for their intricate design, customizability, and multiplayer capabilities. These games, like many FPS games of the era, came with tools that made it easy for players to create their own content. This led to a boom in hobbyist developer creativity that in turn generated entirely new genres of videogames such as the multiplayer online battle arena (MOBA), later refined by immensely successful titles like League of Legends and Dota 2. Other well-loved RTS games of the era include Westwood’s Command & Conquer franchise, Ensemble’s Age of Empires series, and Creative Assembly’s Total War titles.

When it came to console gaming in the Y2K era, the Nintendo 64 set a new standard for multiplayer games with the introduction of four controller ports in 1996, and most subsequent machines followed its lead. Microsoft released the original Xbox in 2001, and its launch title, Halo: Combat Evolved, kicked off a new generation of console-based first-person shooters. In addition to featuring split-screen multiplayer, the Xbox supported a form of LAN play called System Link, which allowed up to sixteen players to play games like Halo simultaneously. The Halo series and Xbox also happened to be instrumental in the decline of the LAN party—more on that later.

Pg 26: © Erwin de Gier Amsterdam (The Netherlands), 1998

Y2K Cultural Trends

Beyond games, LAN party photos also demonstrate some other cultural trends of the period. The late 90s and early 2000s saw the rise of the nu metal musical genre, which included artists like Limp Bizkit, Slipknot, Korn, and Linkin Park. These groups harnessed feelings of isolation and teenage angst and fused rock instrumentation with hip-hop style and delivery, creating a kind of music that was beloved and reviled in equal measure for its direct, unselfconscious emotional pleas, macho posturing, and nihilistic themes.

Simultaneously, anime and Japanese subcultures were becoming more popular in the US due to the introduction of shows like Dragon Ball Z and Sailor Moon on American children’s networks. The growth of the internet, too, was making it easier than ever for young people interested in anime and other niche topics to share their interests and learn more about them on message boards and webrings. Anime and nu metal often went together in the form of the animated music video (AMV), where fans would stitch together clips of their favorite shows as makeshift music videos for their favorite angsty tracks.

The influence of anime and nu metal, as well as the mainstreaming of hip-hop to white suburban audiences, the dark guns-and-leather aesthetics of the films Blade and The Matrix, skater culture, and more can be seen in many of the photos in this book—in the clothing people are wearing, the posters on their walls, and desktop backgrounds. What has today become massively mainstream—anime, gaming, comic books, and so on—was, in the early 2000s, still on the fringes of normalcy. Remember: Iron Man didn’t kick off the Marvel Cinematic Universe until 2008. Crunchyroll, the anime streaming platform, didn’t exist until 2006.

In the same vein, this period also saw the birth of meme culture online. Early internet memes like “Mr. T Ate My Balls,” “All your base are belong to us,” and l33tspeak spread through forums like Something Awful and Flash portals such as Newgrounds, giving young internet users a kind of shared secret language. In the late 2000s, as social networks like Facebook gained traction among college students and more and more people got online, meme culture gradually became mass culture.

Creative Chaos

In addition to the cultural trends of the time, these pictures also show people bringing computers into places where they didn’t traditionally belong. In the 1990s and early 2000s, bulky desktop computers often lived in home offices or even dedicated “computer rooms.” Some lucky few kids at that time had their own personal computers in their bedrooms, but in my experience, this was rare.

During LAN parties, participants brought computers into garages, basements, living rooms, and other spaces, setting them up on dining-room tables, TV trays, kitchen counters, and any available surface. The raw excitement on the part of the participants is evident in the sometimes absurd lengths they went to in order to participate in LAN parties—computer towers crammed between cushions in the back seat of a van to ensure their safe transportation across town; cables crisscrossing the floor to connect machines; CRT monitors balanced haphazardly around the room.

It’s this passion, I think, which partly explains the appeal of these photos—even to those who weren’t around at the time. This book is full of images of people being truly excited about computers and playing games on them. There’s a sense, in looking at these photos, that these people were on the cusp of something—even if they weren’t necessarily aware of it at the time. Since the home computer boom of the 1990s and the introduction of high-speed internet in the 2000s, the omnipresence of computers and communication technology has rendered them mundane to many people. It’s almost quaint to see people so genuinely thrilled to be playing PC games with their friends when, today, doing so is an everyday occurrence.

Making a LAN party happen took work. It took physical effort, technical know-how, and a willingness to hack things together. The range of computer equipment depicted in the photos is testament to that. Yes, there are the standard massive, beige CRT monitors associated with the period, but we see computer towers ranging from stock models in the same color to complex monstrosities built by enthusiastic geeks. This was before Apple’s sleek industrial design took over the tech world, before LED lights were standard on pretty much any gaming PC. It was the era of user-driven customization, and LAN parties are perfectly emblematic of that time.

The Decline Of The LAN Party

LAN parties occurred throughout the 1990s, peaked (in the US, at least) in the early-mid-2000s and began to decline in the early 2010s. Of course, people do still throw LAN parties, especially people who grew up with them, but their heyday has long since passed. So what killed the LAN party? The most obvious answer is the widespread introduction of communication infrastructure that made it possible to play games like first-person shooters over the internet with low latency.

LAN parties were a creation of circumstance, which withered away once it was no longer a necessity to transport computers to the same physical space in order to get an ideal gaming experience. It’s certainly true that it is now more convenient than ever for many people to play games online with strangers or friends. But convenience in technology often comes with a commensurate loss of control on the part of users.

In 2004, Bungie released Halo 2 for the Xbox. The game built on the original’s landmark success by introducing online play through Microsoft’s Xbox Live service. It went on to become the most popular Xbox Live title of all time and was played until the discontinuation of the service on the original Xbox in 2010.

Pgs 70-71: © Kiel Oleson/Electronox Lee’s Summit, MO (USA), 2002

The Xbox Live service was easy to use, popularizing the method of allowing players to party up with their friends and enter into matchmaking queues. This was a major shift from the then prevalent system of presenting players with a list of servers to join, each hosted by different players and groups. As well as being a more seamless experience emphasizing ease of use, this new model also represented a move away from user control. Matchmaking is now the dominant mode of playing multiplayer games online. There are certainly advantages inherent in it—developers can try to create fairer matchups based on skill and players can keep their ranks and reputations across games—but it also puts players at the mercy of a company’s algorithms and servers.

Many of the most popular multiplayer videogames today are entirely server-side, meaning that they cannot be played without connecting to the game’s servers. This is advantageous for developers and publishers, who can ensure that players have the latest updates, prevent cheating, and create large worlds in which numerous players can be online and interacting at once. But it also means that control of game experiences has shifted significantly away from users. Even games that have offline components often do not have any kind of peer-to-peer or private server functionality, meaning that it is impossible for a group to play them together in a LAN environment.

The way we buy and play games has also changed. Today, digital platforms like Steam and the Epic Games Store allow players to purchase titles without leaving their homes. But digital copies of games, and their management through these platforms, mean that the old practices of burning copies of games or sharing legal “spawn installations” of games to facilitate multiplayer experiences are less and less possible.

Thus, the story that LAN parties died because they were simply an obsolete social structure is a little too straightforward. It may be true that most people would prefer to play games in the comfort of their own home rather than transporting expensive and bulky equipment elsewhere, but technological and economic forces also contributed to the decline of LAN events. The fact is that the shift to digital, producer-owned environments in every aspect of gaming—from sales to play—tremendously benefits the corporations publishing and selling games, sometimes at the expense of those purchasing and playing them.

Looking Back

In the photos collected in this book, then, we can see some things that have been lost, or at least forgotten—an adventurous spirit around computing and a world in which ownership of software and play belonged more to individuals than corporations. I don’t mean to suggest that LAN parties were utopian spaces. They were, of course, mostly—but certainly not exclusively—attended and organized by young white men, and even many of the larger events were male-dominated spaces, hostile to women. Nonetheless, from my position, decades later, I can’t help but look fondly on images of LAN parties. At a time when communications technology paradoxically seems to produce a sense of disconnection for many people through algorithmically generated echo chambers and the indexing of personal worth to follower counts or likes, seeing people literally coming together with and around computers is almost aspirational.

It’s tempting to see the mainstreaming of gaming and tech as a uniformly positive trend. And certainly, more people having access to these things and feeling like they belong in associated spaces is a good thing. But there are always trade-offs. The ubiquity of, and widespread access to, tech has come with an unprecedented rise in surveillance through our devices, a loss of control over our personal data, and a sense of alienation fostered by tech companies who want to own as much of our attention as possible.

For people like me, who grew up during the 1990s and 2000s, it can sometimes feel like the exciting period of the internet and computing is over. Pictures of LAN parties represent that early era of the internet, when it was a place that you visited rather than a parallel layer of reality. As we’ve watched that mysterious, alluring, and perilous internet get progressively fenced off, paywalled, and centralized by a few massive corporations, some of us are beginning to reflect on our relationship to it.

Perhaps this thing that was so important to us in our youth, that we’ve stubbornly stuck with despite sweeping structural changes, is no longer so relevant to our lives. Maybe it’s time to start figuring out new ways to use the internet and computers to enrich our world. And maybe LAN parties can offer one model for that.

Excerpted from LAN PARTY: Inside the Multiplayer Revolution, by merritt k

Text © 2023 merritt k 

© 2024 Thames & Hudson Ltd, London

Reprinted by permission of Thames & Hudson Inc, www.thamesandhudsonusa.com

Pg 118: © Robert McNeil Brisbane (Australia), 2006

McKinsey Under Criminal Investigation over Opioid-Related Consulting

www.wsj.com - Comments

Nearsightedness is at epidemic levels – and the problem begins in childhood

theconversation.com - Comments

Myopia, or the need for corrected vision to focus or see objects at a distance, has become a lot more common in recent decades. Some even consider myopia, also known as nearsightedness, an epidemic.

Optometry researchers estimate that about half of the global population will need corrective lenses to offset myopia by 2050 if current rates continue – up from 23% in 2000 and less than 10% in some countries.

The associated health care costs are huge. In the United States alone, spending on corrective lenses, eye tests and related expenses may be as high as US$7.2 billion a year.

What explains the rapid growth in myopia?

I’m a vision scientist who has studied visual perception and perceptual defects. To answer that question, first let’s examine what causes myopia – and what reduces it.

A closer look at myopia.

How myopia develops

While having two myopic parents does mean you’re more likely to be nearsighted, there’s no single myopia gene. That means the causes of myopia are more behavioral than genetic.

Optometrists have learned a great deal about the progression of myopia by studying visual development in infant chickens. They do so by putting little helmets on baby chickens. Lenses on the face of the helmet cover the chicks’ eyes and are adjusted to affect how much they see.

Just like in humans, if visual input is distorted, a chick’s eyes grow too large, resulting in myopia. And it’s progressive. Blur leads to eye growth, which causes more blur, which makes the eye grow even larger, and so on.

Two recent studies featuring extensive surveys of children and their parents provide strong support for the idea that an important driver of the uptick in myopia is that people are spending more time focusing on objects immediately in front of their eyes, whether a screen, a book or a drawing pad. The more time we spend focusing on something within arm’s length of our faces, dubbed “near work,” the greater the odds of having myopia.

So as much as people might blame new technologies like smartphones and too much “screen time” for hurting our eyes, the truth is even activities as valuable as reading a good book can affect your eyesight.

Outside light keeps myopia at bay

Other research has shown that this unnatural eye growth can be interrupted by sunlight.

A 2022 study, for example, found that myopia rates were more than four times greater for children who didn’t spend much time outdoors – say, once or twice a week – compared with those who were outside daily. At the same time, kids who spent more than three hours a day outside of school reading or looking at a screen up close were four times more likely to have myopia than those who spent an hour or less doing so.

In another paper, from 2012, researchers conducted a meta-analysis of seven studies that compared duration of time spent outdoors with myopia incidence. They also found that more time spent outdoors was associated with lower myopia incidence and progression. The odds of developing myopia dropped by 2% for each hour spent outside per week.

Other researchers have reported similar effects and argued for much more time outdoors and changes in early-age schooling to reduce myopia prevalence.

‘Why so many people need glasses now.’

What’s driving the epidemic

That still doesn’t explain why it’s on the rise so rapidly.

Globally, a big part of this is due to the rapid development and industrialization of countries in East Asia over the last 50 years. Around that time, young people began spending more time in classrooms reading and focusing on other objects very close to their eyes and less time outdoors.

This is also what researchers observed in the North American Arctic after World War II, when schooling was mandated for Indigenous people. Myopia rates for Inuit went from the single digits before the 1950s to upwards of 70% by the 1970s as all children began attending schools for the first time.

Countries in Western Europe, North America and Australia have shown increased rates of myopia in recent years but nothing approaching what has been observed recently in China, Japan, Singapore and a few other East Asian countries. The two main factors identified as leading to increased myopia are increased reading and other activities that require focusing on an object close to one’s eyes and a reduction in time spent outdoors.

The surge in myopia cases will likely have its worst effects 40 or 50 years from now because it takes time for the young people being diagnosed with nearsightedness now to experience the most severe vision problems.

Treating myopia

Fortunately, just a few minutes a day with glasses or contact lenses that correct for blur stops the progression of myopia, which is why early vision testing and vision correction are important to limit the development of myopia. Eye checks for children are mandatory in some countries, such as the U.K. and now China, as well as most U.S. states.

People with high myopia, however, have an increased risk of blindness and other severe eye problems, such as retinal detachment, in which the retina pulls away from the back of the eye. The chances of myopia-related macular degeneration increase by 40% for each diopter of myopia. A diopter is a unit of measurement used in eye prescriptions.
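
To put that in rough perspective – assuming, as a back-of-the-envelope reading, that the 40% increase compounds with each additional diopter – a -3.00 prescription would carry roughly 1.4 × 1.4 × 1.4 ≈ 2.7 times the baseline risk of myopia-related macular degeneration, and a -6.00 prescription roughly 1.4^6 ≈ 7.5 times.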

But there appear to be two sure-fire ways to offset or delay these effects: Spend less time focusing on objects close to your face, like books and smartphones, and spend more time outside in the bright, natural light. Given the first one is difficult advice to take in our modern age, the next best thing is taking frequent breaks – or perhaps spending more time reading and scrolling outside in the sun.

I now lack the juice to fuel the bluster to conceal that I am a simpleton

lithub.com - Comments


A feature-rich front-end drag-and-drop component library

github.com - Comments

Fast drag and drop for any experience on any tech stack


When do we stop finding new music?

www.statsignificant.com - Comments

Say Anything (1989). Credit: 20th Century Studios.

I recently tried Spotify's new DJ feature in which an AI bot curates personalized listening sessions, introducing songs while explaining the intention behind its selections (much like a real-life disc jockey). Every four or five pieces, the bot interjects to set up its next block of music, ascribing a theme to these upcoming works. Here are some of my example introductions:

  • "Next, we're gonna play some of your favorites from 2016."

  • "Here are some of your favorite indie rock songs from the 2010s."    

  • "Up next, we have some music inspired by your love of 2000s hip-hop."

With each DJ interlude, something became increasingly clear: my music taste had barely changed over the course of a decade. Armed with full knowledge of my musical interests, this AI agent had pinpointed my musical paralysis, packaging an algorithmic echo chamber of 2010s indie rock, 2000s pop, Bo Burnham, Blink-182, and Bruce Springsteen. Had my music taste stagnated?     

This minor existential tailspin sent me down a Google rabbit hole—I began frantically researching music paralysis and the science of sonic preference. Was this phenomenon of my own doing or a natural product of aging? Fortunately, the topic of song stagnation has been well-researched, aided by the robust datasets of streaming services. 

So today, we'll explore how our relationship to music changes with age and the developmental phenomena driving our forever-shifting cultural tastes.

Open-earedness refers to an individual's desire and ability to listen to and consider different sounds and musical stylings. Research has shown that adolescents exhibit higher levels of open-earedness, with a greater willingness to explore and appreciate diverse musical genres. During these years of sonic exploration, music gets wrapped up in the emotion and identity formation of youth; as a result, the songs of our childhood prove wildly influential over our lifelong music tastes.

A New York Times analysis of Spotify data revealed that our most-played songs often stem from our teenage years, particularly between the ages of 13 and 16.

This finding has personal resonance, as I remember my cultural preferences being easily influenced during my pre-teen and early teenage years. For instance, I was twelve when Green Day released their landmark "American Idiot" album, a work that proved monumental in my relationship to music. Listening to the album's titular track felt like a supreme act of rebellion (for a twelve-year-old suburbanite). I was entranced by this song's iconoclastic spirit—could they actually say, "f**k America?"      

American Idiot Music Video: Credit: Reprise Records.

But "American Idiot" wasn't a true act of revolution. In fact, the album was produced and promoted by a multinational conglomerate with the intent of packaging seemingly transgressive pop-punk acts for my exact demographic. How was I so thoroughly seduced by this song? And yet, to this day, my visceral reaction to “American Idiot” is still one of euphoria, despite my cynicism. I guess I have no choice but to love this song forever (thanks to pre-teen me). 

Indeed, YouGov survey data indicates a strong bias toward music from our teenage years, a phenomenon that is consistent across generations. Every cohort believes that music was "better back in my day."  

Ultimately, cultural preferences are subject to generational relativism, heavily rooted in the media of our adolescence. It's strange how much your 13-year-old self defines your lifelong artistic tastes. At this age, we're unable to drive, vote, drink alcohol, or pay taxes, yet we're old enough to cultivate enduring musical preferences. 

The pervasive nature of music paralysis across generations suggests that the phenomenon's roots go beyond technology, likely stemming from developmental factors. So what changes as we age, and when does open-earedness decline?

Survey research from European streaming service Deezer indicates that music discovery peaks at 24, with survey respondents reporting increased variety in their music rotation during this time. However, after this age, our ability to keep up with music trends typically declines, with respondents reporting significantly lower levels of discovery in their early thirties. Ultimately, the Deezer study pinpoints 31 as the age when musical tastes start to stagnate.

These findings have been replicated across numerous analyses, including a study of Spotify user data from 2014. Produced from Spotify's internal dataset, this research explores how tastes deviate from the mainstream with age. In this analysis, a contemporary pop star like Dua Lipa would score a 1 (the most popular), and an artist further out of the zeitgeist like Led Zeppelin would rank somewhere in the 200s. The resulting visual is unnerving as we observe our cultural preferences (quite literally) spiral away from the mainstream as we grow older.

This study identifies 33 as the tipping point for sonic stagnation, an age where artistic taste calcifies, increasingly deviating from contemporary works. But wait, there's more. Spotify data indicates that parents stray from the mainstream at an accelerated rate compared to empty nesters—a sort of "parent tax" on one's cultural relevancy.

But this stagnation goes beyond the popularity of our music selections; it's also the diversity across these works. From 30 onward, we listen to more music outside the mainstream and sample fewer artists during streaming sessions.

Reading these studies proved an existential body blow because I am 31, apparently on the precipice of becoming a musical dinosaur. I like to think I'm special—that my high-minded dedication to culture makes me an exceptionally unique snowflake—but apparently I'm just like everybody else. I turned 30, and now I'm in a musical rut, content to have an AI bot DJ pacify me with the songs of my youth. 

I used to spend hours researching artists, scrutinizing my CD purchases, and, later, my iTunes selections. Musical exploration was an activity in and of itself; songs were more than background noise. Now, I'm stuck listening to James Blunt's "You're Beautiful" for the 1,000th time. What happened to me?

Music paralysis is the product of both biological trends and practical constraints. Deezer survey respondents who identified as being "in a musical rut" cited numerous day-to-day limitations as cause for their stagnation, with the top three reasons being

  1. Overwhelmed by the amount of choice available: 19%

  2. Having a demanding job: 16%

  3. Caring for young children: 11%

This first point regarding the paradox of choice is especially intriguing and would speak to streaming as some sort of societal ill, bombarding us with boundless content. It's easy to condemn Spotify for giving us too many options, but this complaint is likely emblematic of a broader developmental shift. 

Context is critical to cultural discovery. An extensive cross-sectional study regarding musical attitudes and preferences from adolescence through middle age found that our relationship with music drastically changes over time. Surveying over 250,000 individuals, this study found:

  1. The degree of importance attributed to music declines with age, even though adults still consider music important.

  2. Young people listen to music significantly more than middle-aged adults.

  3. Young people listen to music in a wide variety of contexts and settings, whereas adults listen to music primarily in private contexts.

The issue of music discovery does not originate from infinite choice; instead, this problem likely stems from decreased listenership and a waning commitment to exploration. Spending two hours a day combing through iTunes (now Spotify) is impractical. My priorities have changed, my emotional connection to music has changed, and I simply don't have the time.

Indeed, this same cross-sectional study revealed that musical preferences are closely related to trends in psychosocial development. In this survey, researchers investigated how tastes vary across five dimensions as we age: intensity, contemporaneousness, unpretentiousness, sophistication, and mellowness. The data they collected demonstrates a universality to our forever-changing relationship with music—it's natural to expect a progression in our preferences.

It's tempting to despair over these results, to accept changing cultural attitudes and the phenomenon of music paralysis as a predetermined truth. At the same time, stagnation is not a certainty. Research suggests that open-earedness and the discovery of new songs can be cultivated. Finding new music is a challenge, but it is achievable with dedicated time and effort. If we avoid the warm complacency of nostalgia, we can recapture our flair for music discovery.

High Fidelity (2000). Credit: Buena Vista Pictures.

My father "likes what he likes": Bruce Springsteen, Field of Dreams, The Washington Nationals, and consistently reminding me that Fleetwood Mac's Rumours was made after its bandmates divorced one another. Whenever I point out my dad's stubborn habits, he'll look at me, smile, and quote the immortal wisdom of Popeye: "I am what I am."  

When I was younger, I strongly disliked this rationale. Surely, there is no fixed version of who we are. Humans are constantly evolving—perpetually engaged in self-discovery. But maybe this isn't the case for all facets of life.   

The explore-exploit trade-off refers to the dilemma between seeking new information (exploring) and optimizing decisions based on known information (exploiting). Some examples of the explore-exploit trade-off include: 

  • Restaurant selection: Do you find a new restaurant or return to your old haunts? 

  • Movies: Do you watch something new or re-watch an all-time favorite?  

  • Career: Should you keep your current job or look for a new one?

In the case of music discovery, exploring would consist of finding new songs and subgenres, while exploiting would entail listening to already-beloved tunes.

The explore-exploit trade-off and an adjacent decision-making puzzle known as the optimal-stopping problem have prompted extensive research and the coining of a shortcut known as the 37% rule. This heuristic suggests we spend the first 37% of available search time exploring our options before settling on a preferred solution or selection.  
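
To make the heuristic concrete, here is a tiny simulation of the rule (my own sketch, not taken from the studies above; the function and variable names are made up):

const pickWithRule = (appeal: number[], exploreFraction = 0.37): number => {
    // Phase 1: listen to the first ~37% without committing, remembering the best so far.
    const cutoff = Math.floor(appeal.length * exploreFraction)
    const benchmark = Math.max(-Infinity, ...appeal.slice(0, cutoff))
    // Phase 2: commit to the first option that beats everything from the exploration phase.
    for (let i = cutoff; i < appeal.length; i++) {
        if (appeal[i] > benchmark) return i
    }
    return appeal.length - 1 // nothing beat the benchmark: settle for the last option
}

// Rough check: with randomly drawn "appeal" scores, this strategy lands on the single
// best option roughly 37% of the time, which is the classic optimal-stopping result.
let hits = 0
for (let t = 0; t < 10_000; t++) {
    const appeal = Array.from({ length: 100 }, () => Math.random())
    if (pickWithRule(appeal) === appeal.indexOf(Math.max(...appeal))) hits++
}
console.log(`picked the best option in about ${(100 * hits) / 10_000}% of trials`)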

In the case of musical preference, the current American lifespan averages 80 years; when we multiply this figure by 37%, we get 30 years—coincidentally, the age at which music tastes stagnate. This back-of-the-envelope math could be interpreted in two ways: 

  1. I am going crazy: I see numbers and symbols that don't mean anything. The 37% rule is a vague heuristic that may not even apply to this case, and I am perceiving order from true randomness.

  2. 30 is our optimal stopping point: Despite the 37% rule being a highly generalized heuristic, there is some merit to doubling down on our favorites after a sustained period of searching—a phenomenon that appears to be our default state. We spend 30 years exploring new music, and once we've sampled enough works, we reach an optimal stopping point, comfortable with our rotation of artists and songs.

Maybe music paralysis is a feature, not a bug. Running on a never-ending treadmill of cultural exploration may be a recipe for discontent. There is nothing inherently wrong with "liking what you like." Is it my waning music discovery that's making me unhappy or the fact that I've yet to accept this reality?

Perhaps I should forsake sonic exploration and exploit my love of "American Idiot," 2010s indie rock, 2000s pop, Bo Burnham, Blink-182, and Bruce Springsteen, content to live in an algorithmic echo chamber curated by DJ—my new AI savior. 


Want to chat about data and statistics? Have an interesting data project? Just want to say hi? Email [email protected]        

TypeScript: Branded Types

prosopo.io - Comments

PART 1: TypeScript Mapped Type Magic

Ahoy there TypeScript warriors! 👋 Today we're extending our work in the TypeScript mapped types article to provide branding. The previous article discussed how to use TypeScript mapped types in a nominal rather than structural nature.

type A = {
    x: number
}

type B = {
    x: number
}

This is a fancy way of saying TypeScript is structural by default, i.e. it will see types A and B as equal when dealing with types. Making types A and B nominal would make TypeScript tell them apart, even though their structure is the same.

In this post, we're building on that work to produce a way to brand a type, providing an automated and easy-to-use way of making a type nominal. Branding works at the type level only, rather than introducing runtime fields as in the previous post, which is a major benefit over the previous approach.

What's the problem?

Branding, also known as opaque typing, enables differentiation of types in TypeScript which would otherwise be classified as the same type. For example:


type A = {
    x: number,
    y: boolean,
    z: string,
}

type B = {
    x: number,
    y: boolean,
    z: string,
}

A and B are structurally the same, ergo TypeScript accepts any instance of A or B in place of each other:


const fn = (a: A) => {
    console.log('do something with A')
}

const obj: B = {
    x: 1,
    y: true,
    z: 'hello'
}

fn(obj) 

The function is looking for a value of type A as input, whereas we're passing it a value of type B. TypeScript compares the types structurally, and because they have exactly the same structure it deems this operation to be fine.

But what if we need to tell A and B apart? What if, conceptually speaking, they must be different? What if we're doing something fancy with A and B which TypeScript is unaware of but we require the types to be different? That's exactly the situation we found ourselves in lately!

We need branding to do exactly that.

The solution

Much like in the TypeScript mapped types article, the key lies in creating a field with the name of a symbol to act as our id. However, with branding we only need this field at a type-level rather than the runtime-level. Since types are erased after compilation, we need to add this field to a type without altering the runtime data whatsoever. Casting, anyone?

First, let's introduce the brand field.


const brand = Symbol('brand') 

type A = {
    x: number,
    y: boolean,
    z: string,
} & {
    [brand]: 'A'
}

Here we're adding the brand field to type A. The brand field name is a symbol, akin to a UUID. We use a symbol to ensure the brand field never clashes with any other field on A, because otherwise we'd be overwriting a field and introducing the worst kind of bugs: type bugs 🐛 . We've set the brand to 'A' at the moment, though this could be anything you desire. It's akin to the type name. Now let's compare A and B again:


const fn = (a: A) => {
    console.log('do something with A')
}

const obj: B = {
    x: 1,
    y: true,
    z: 'hello'
}

fn(obj) 

Here's the error:

Argument of type 'B' is not assignable to parameter of type 'A'.
  Property '[brand]' is missing in type 'B' but required in type '{ [brand]: "A"; }'.ts(2345)

TypeScript won't let us pass an instance of B to the function accepting A because it's missing the brand field - brilliant! A and B are now different types. But what about if B had its own brand?


type B = {
    x: number,
    y: boolean,
    z: string,
} & {
    [brand]: 'B'
}

Note that we're using the same brand variable from before. It's important to keep this constant, otherwise we're declaring fields with different names!

Now let's try the function again:


const fn = (a: A) => {
    console.log('do something with A')
}

const obj: B = {
    x: 1,
    y: true,
    z: 'hello'
}

fn(obj) 

And here's the error

Argument of type 'B' is not assignable to parameter of type 'A'.
  Type 'B' is not assignable to type '{ [brand]: "A"; }'.
    Types of property '[brand]' are incompatible.
      Type '"B"' is not assignable to type '"A"'.ts(2345)

There we go! The error is saying that though both types have a brand field, the value for the brand is different for the two types, i.e. 'A' != 'B'!

Let's see what happens if the brand is the same:



type A = {
    x: number,
    y: boolean,
    z: string,
} & {
    [brand]: 'foobar'
}

type B = {
    x: number,
    y: boolean,
    z: string,
} & {
    [brand]: 'foobar'
}

const fn = (a: A) => {
    console.log('do something with A')
}

const obj: B = {
    x: 1,
    y: true,
    z: 'hello'
}

fn(obj) 

No error! A and B are seen as interchangeable types because they're structurally the same, having the same fields and same brand value of 'foobar'. Excellent!

Make it generic!

Awesome, so that works. But it's a toy example, not fit for production. Let's create a Brand type which can brand any type you wish:

const brand = Symbol('brand') 

type Brand<T, U> = T & {
    [brand]: U
}

This type is very simple, it takes your type T and adds a brand field with U being the brand value. Here's how to use it:


type A_Unbranded = {
    x: number,
    y: boolean,
    z: string,
}

type A = Brand<A_Unbranded, 'A'> 





So now we can brand any type. For completeness, here's the same kind of thing to remove the brand and go back to plain ol' TypeScript types:

type RemoveBrand<T> = Omit<T, typeof brand>

And this will remove the brand field from any branded type. Also note that if the type is not branded, it will not be touched!
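
For instance, here is a minimal round trip using the A defined above (the A_Plain alias is just for this sketch):

type A_Plain = RemoveBrand<A> // back to { x: number, y: boolean, z: string }

const plain: A_Plain = { x: 1, y: true, z: 'hello' } // fine: no brand field required
// const branded: A = { x: 1, y: true, z: 'hello' } // error: property '[brand]' is missing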

Real world usage

Let's put this into practice. We've got a class which needs branding to identify its type when dealing with mapped types.

For simplicity, let's boil the class down to a Dog class:


class Dog {
    constructor(public name: string) {}
}

type DogBranded = Brand<Dog, 'Dog'>

const dog = new DogBranded('Spot') 

TypeScript won't let us construct a branded dog 😢 . We're going to need to do some casting, branding the constructor rather than the class itself.


type Ctor<T> = new (...args: any[]) => T

const addBrand = <T, U extends string>(ctor: Ctor<T>, name: U) => {
    return ctor as Ctor<Brand<T, U>>
}

const DogBranded = addBrand(Dog, 'Dog')

const dog = new DogBranded('Spot') 

The addBrand function takes a constructor of a class and casts it to a branded type. This essentially makes an alias for the Dog class which can be used in exactly the same way as the Dog class, e.g. calling new on it.

We can export the DogBranded type to allow the outer world to use our class whilst ensuring it's always branded:

export type DogExported = typeof DogBranded
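
As a quick illustration of what the brand buys us (the walk function here is made up for this sketch):

const walk = (dog: Brand<Dog, 'Dog'>) => {
    console.log(`walking ${dog.name}`)
}

walk(new DogBranded('Spot')) // OK: the branded constructor produces Brand<Dog, 'Dog'>
// walk(new Dog('Spot'))     // error: property '[brand]' is missing in type 'Dog'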

Likewise, we can do the same for brand removal:


const removeBrand = <T>(value: T) => {
    return value as RemoveBrand<T>
}

This simply removes the brand by casting the type to a type mapped without the brand field.
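
And a quick (sketched) round trip with the Dog example, assuming the helpers above:

const dog = new DogBranded('Spot') // typed as Brand<Dog, 'Dog'>
const plainDog = removeBrand(dog)  // typed (structurally) as an unbranded Dog again
console.log(plainDog.name)         // 'Spot' – the runtime value was never touched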

And there we go: a sure-fire way to brand and un-brand your types in TypeScript 😃

We've published this work as a library which you can access via NPM!

At Prosopo, we're using TypeScript branding to fortify our types and do clever type mapping for our soon-to-be-released runtime type validator. Stay tuned for updates!

PART 1: TypeScript Mapped Type Magic


Borrow Checking, RC, GC, and the Eleven (!) Other Memory Safety Approaches

verdagon.dev - Comments

A fellow named Zeke came into my server one day.

Zeke: "Wait, so with generational references, we now have four ways to do memory safety?"

Evan: "In fact, there are fourteen by my count. Maybe more!" 0

Zeke: "Fourteen?!"

I've gotten so used to it that it's not surprising to me anymore, so it's always a delight to vicariously feel people's surprise when I tell them this.

Evan: "Indeed," and I proceed to show him the grimoire 1 that I've kept secret all these years.

Zeke: "How did you find all these?!"

At this point, I likely told him some nonsense like "I just kept my eyes open and collected them over the years!" but I think that you, my dear reader, deserve to know the truth! 2

This article is the introduction to my secret collection of memory safety techniques, which I call the memory safety grimoire.

With this wisdom, one can see the vast hidden landscape of memory safety, get a hint about where the programming world might head in the next decades, and even design new memory safety approaches. 3

If you like this topic, check out this Developer Voices episode where Kris Jenkins and I talked about linear types and regions!

Borrow checking, RC, GC, and the Eleven (!) Other Memory Safety Approaches

Notes
0

And I'd bet that someone on reddit or HN will comment on some I haven't heard before, and I'll have to change the title and add to the list!

1

A grimoire is a cursed spellbook, like the necronomicon.

However, those of weak wills should be careful not to read grimoires... they might end up pursuing the dark arts for years.

2

Or perhaps this entire article is just a clever ruse, the mask behind the mask, and the truth still remains a secret.

The Stainless SDK Generator

www.stainlessapi.com - Comments

Stainless generates the official client libraries for OpenAI, Anthropic, Cloudflare, and more. Today, we’re making the Stainless SDK generator available to every developer with a REST API.

During our private beta, we’ve been able to help millions of developers integrate faster and more reliably with the latest features of some of the world’s most powerful and exciting APIs.

The Stainless SDK generator accepts an OpenAPI specification and uses it to produce quality SDKs in multiple programming languages. As your API evolves, our automated generator continuously pushes changes, ensuring that your SDKs remain up-to-date—even as you make arbitrary custom edits to the generated code.

Here’s a quick, real-world example of the code we can generate for you, and how you configure it with Stainless:

Examples in Python, TypeScript, and Go, followed by the corresponding Stainless config:


from cloudflare import Cloudflare

client = Cloudflare()
client.zones.create(account={"id": "xxxx"}, name="my zone", type="full")


import Cloudflare from "cloudflare";

async function main() {
  const cloudflare = new Cloudflare();
  const zone = await cloudflare.zones.create({
    account: { id: "xxx" },
    name: "example.com",
    type: "full",
  });
}


package main

import (
    "context"

    "github.com/cloudflare/cloudflare-go/v2"
    "github.com/cloudflare/cloudflare-go/v2/zones"
)

func main() {
    client := cloudflare.NewClient()
    zone, err := client.Zones.New(context.Background(), zones.ZoneNewParams{
        Account: cloudflare.F(zones.ZoneNewParamsAccount{
            ID: cloudflare.F("023e105f4ecef8ad9ca31a8372d0c353"),
        }),
        Name: cloudflare.F("example.com"),
        Type: cloudflare.F(zones.ZoneNewParamsTypeFull),
    })
    if err != nil {
        panic(err)
    }
    _ = zone
}


resources:
  zones:
    methods:
      create: post /v1/zones


See the source code for these endpoints in
TypeScript, Python, and Go.

“The decision to use Stainless has allowed us to move our focus from building the generation engine to instead building high-quality schemas to describe our services.

In the span of a few months, we have gone from inconsistent, manually maintained SDKs to automatically shipping over 1,000 endpoints across three language SDKs with hands-off updates.”

Jacob Bednarz, API Platform Tech Lead, Cloudflare (see blog post)

From the very first day that stripe.com existed on the internet, SDKs were a big part of the pitch to developers. Today, well over 90% of Stripe developers make well over 90% of requests to the Stripe API through the SDKs. As the front door to the API, the SDKs are how developers think about Stripe—to most people, it’s stripe.charges.create(), not POST /v1/charges.

Personally, I hadn’t appreciated this until I joined Stripe in 2017—but by then, the growing scope of the Stripe API had exceeded our capacity to build SDKs by hand sustainably. Making manual changes across 7 different programming languages whenever we shipped a new endpoint was toilsome and error-prone.

Not only that, TypeScript had eaten the world. Now, developers expect typeahead and documentation-on-hover directly in their text editor. Building SDKs that support comprehensive static types in a variety of languages was totally unthinkable without code generation.

The team had spent over a year exploring existing open-source code generation tools before concluding that none could meet Stripe’s quality standards. Most codegen tools either work in only one language or just use string templating, which leaves you constantly fiddling with issues like trailing commas in Go and invalid indentation in Python. We needed to build our own codegen tool that enabled us to “learn once, write everywhere” and easily produce clean, correct code across all our languages.

Over a weekend, I hacked together a surprising mashup of JSX and the internals of prettier to enable product developers to quickly template quality code that came well-formatted out of the box. Over the next several months, dozens of engineers helped convert the SDKs to codegen, matching our carefully handcrafted code byte for byte.

By later that year, I was pairing with a colleague to produce the first official TypeScript definitions for the Stripe API.

After I left Stripe, engineers kept asking me how to build great SDKs for their API. I didn’t have a great answer—most companies don’t have several engineer-years lying around to build a whole suite of high-quality code generators spanning a range of popular languages.

In early 2022, I set out to bootstrap a company, and Lithic became our first customer, making Stainless ramen-profitable from day one. Their head of product had previously been the PM of SDKs at Plaid, where she’d seen firsthand both how valuable SDKs are and how hard it is to codegen decent ones with the openapi-generator.

Despite the allure of a small, bootstrapped company, I felt bad limiting its impact to the small number of clients I could handle by myself. What’s more, I also got asked how to keep OpenAPI specs up-to-date and valid, how to evolve API versions, how to design RESTful pagination, how to set up API Keys, and a million other problems my old team had already solved at Stripe.

Eventually it became clear that the world needed a comprehensive developer platform—from docs to request logs to rate-limiting—that could enable REST to live up to its potential.

Sequoia soon became our first investor, shortly followed by great angels like Cristina Cordova, Guillermo Rauch, Calvin French-Owen, and dozens more.

There was clearly a huge opportunity to advance the whole REST ecosystem and help a ton of people ship better APIs.

APIs are the dendrites of the internet. Literally all internet software connects through APIs—they make up the vast majority of internet traffic.

Today, the API ecosystem is deeply fragmented:

  1. GraphQL. Great for frontends, but not built for server-to-server interactions and hasn’t worked out well for public APIs.
  2. gRPC. Great for microservices, but doesn’t work for frontends and is unpopular for public APIs.
  3. REST. Works for frontends, microservices, and public APIs. It’s simple, flexible, and aligned with web standards—but also messy and hard to get right.

Engineering organizations should be able to use one API technology for everything—their frontends, their microservices, and their public API—and have a good experience everywhere.

At Stainless, rather than trying to fit GraphQL or gRPC into the square holes they weren’t designed for—or invent some new 15th standard—we are staunch believers that “REST done right” can deliver this vision.

We want to build great open-source standards and tooling that bring the benefits of GraphQL (types, field selection/expansion, standards) and gRPC (types, speed, versioning) to REST.

When Stainless REST is realized, you’ll be able to start building a full-stack application using our API layer and have at least as good of a frontend experience as you would have had with GraphQL. When you add a Go microservice, you’ll be able to interconnect with typed clients, efficient packets, and low latency. And then—uniquely—when your biggest customer asks you for an external API, you’ll be able to just say “yes” and change internal(true) to internal(false) instead of rewriting the whole thing.

Today, our SDK announcement tackles the most salient problem with REST: type safety.

Our next project is building out a development framework that enables users to ship quality, typesafe REST APIs from any TypeScript backend. With the upcoming Stainless API framework, you declare the shape and behavior of your API in declarative TypeScript code and get an OpenAPI specification, documentation, and typed frontend client without a build step.

We’re building the framework around REST API design conventions that support rich pagination, consistent errors, field inclusion and selection, and normalized caching on the frontend. These conventions, influenced by best practices at Stripe, can help your team achieve consistent, high-quality results without untold hours of bikeshedding.

“The cool part about this is that you can define your API once and get client libraries in every language for free, whether you are an expert in those languages or not.

It’s rare that a startup will have people who know Python, Node, Ruby, Rust, Go, Java, etc, etc. But now they can market to all those developers at once.”

Calvin French-Owen, co-founder, Segment

Producing a good SDK is more involved than many developers may realize, especially when relying on code generation. The details matter, and it’s not just about pretty code—it’s about making the right choices and balancing some challenging tradeoffs between the characteristics of REST APIs and the idioms of the language at hand.

Here are a few generic examples:

  • How do you handle response enums in Java? The obvious approach can result in crashes when adding a new variant in the future.
  • If your API introduces a union type, how do you express that in Go, given that the language does not have a standard way to express union types?
  • If unexpected data comes back from the server—whether due to a beta feature, an edge case, or a bug—how do you expose that data to the user? Should the client library treat it like an error? Is there an idiomatic way to achieve this across every programming language? Finding a good solution requires carefully weighing conflicting type safety and runtime safety considerations.
  • Should you automatically retry on 429 or 503 errors? How quickly? What if the API is experiencing a production outage?
  • What should you call the method for /v1/invoices/{id}/void in a Java client library? (hint: it can’t be void).

Note that this last problem can’t simply be decided by a machine—it requires context about the rest of the API, and must be decided by a human (even if an LLM can offer a first guess).

The void endpoint example is obviously an edge case, but the general question of what to name each method and type is as pernicious as it is pedestrian. If an SDK generator infers all names directly from the OpenAPI specification—particularly a specification generated from other sources—users may be confronted with nonsensical types like AccountWrapperConfigurationUnionMember4 that raise questions about your company’s overall engineering quality.

If you ship an SDK without first addressing these issues, you risk locking yourself into design blunders that you may not be able to resolve later without breaking backwards compatibility in ways that are highly disruptive to users.

We shared the void example above because it is easy to understand, but there are a range of potential pitfalls that are even more subtle and abstruse that inevitably arise in non-trivial APIs. We can build tools to identify such issues, but deciding how to resolve them is often beyond the scope of what can be achieved with automation—even with AI. You need a human being with relevant context and sound judgement to assess the options and make an informed decision.

From experience, we knew that thoughtful SDK development is a lot more difficult than it seems—auditing every single type name in a typical, medium-sized API requires scanning through tens of thousands of lines of code.

To enable every developer to ship with the same level of care that we devote to our enterprise clients, we created an SDK Studio that highlights potential problems and makes it easy to quickly scan through all the things you may want to review before shipping a v1:

The Stainless SDK Studio

To start using the Stainless SDK generator, all you need is an OpenAPI specification.

Within a few minutes, you’ll get alpha SDKs you can publish to package managers—and after a bit of polishing, something you’re proud to release as v1.0.0.

To get started, check out our documentation or connect your GitHub account.

Snowflake Arctic Instruct (128x3B MoE), largest open source model

replicate.com - Comments

Pricing

This language model is priced by how many input tokens are sent and how many output tokens are generated.

Check out our docs for more information about how per-token pricing works on Replicate.

Readme

Arctic is a dense-MoE Hybrid transformer architecture pre-trained from scratch by the Snowflake AI Research Team. We are releasing model checkpoints for both the base and instruct-tuned versions of Arctic under an Apache-2.0 license. This means you can use them freely in your own research, prototypes, and products. Please see our blog Snowflake Arctic: The Best LLM for Enterprise AI — Efficiently Intelligent, Truly Open for more information on Arctic and links to other relevant resources such as our series of cookbooks covering topics around training your own custom MoE models, how to produce high-quality training data, and much more.

For the latest details about Snowflake Arctic including tutorials, etc. please refer to our github repo: https://github.com/Snowflake-Labs/snowflake-arctic

Model developers Snowflake AI Research Team

License Apache-2.0

Input Models input text only.

Output Models generate text and code only.

Model Release Date April 24th, 2024.

Model Architecture

Arctic combines a 10B dense transformer model with a residual 128x3.66B MoE MLP resulting in 480B total and 17B active parameters chosen using a top-2 gating. For more details about Arctic’s model Architecture, training process, data, etc. see our series of cookbooks.
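
As a rough back-of-the-envelope check, using only the figures quoted above: total parameters ≈ 10B (dense) + 128 × 3.66B (experts) ≈ 478B, in line with the stated 480B; active parameters per token ≈ 10B + 2 × 3.66B ≈ 17.3B, in line with the stated 17B for top-2 gating over 128 experts.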

Biden signs TikTok bill into law, starting clock for ByteDance to divest

www.theverge.com - Comments

The divest-or-ban bill is now law, starting the clock for ByteDance to make its move. The company has an initial nine months to sort out a deal, though the president could extend that another three months if he sees progress.

While just recently the legislation seemed like it would stall out in the Senate after being passed as a standalone bill in the House, political maneuvering helped usher it through to Biden’s desk. The House packaged the TikTok bill — which upped the timeline for divestment from the six months allowed in the earlier version — with foreign aid to US allies, which effectively forced the Senate to consider the measures together. The longer divestment period also seemed to get some lawmakers who were on the fence on board.

TikTok spokesperson Alex Haurek said in a statement that the company plans to challenge the law in the courts, which could ultimately extend the timeline should the courts delay enforcement pending a resolution. There also remains the question of how China will respond and whether it would let ByteDance sell TikTok and, most importantly, its coveted algorithm that keeps users coming back to the app.

“As we continue to challenge this unconstitutional ban, we will continue investing and innovating to ensure TikTok remains a space where Americans of all walks of life can safely come to share their experiences, find joy, and be inspired,” Haurek said. 

“Make no mistake, this is a ban,” TikTok CEO Shou Chew said in a video posted on TikTok Wednesday, objecting to some lawmakers’ assertions that they just want to see the platform disconnected from Chinese ownership. “A ban on TikTok and a ban on you and your voice.”

Update, April 24th: The article has been updated with an official statement from a TikTok spokesperson and its CEO.

Sysadmin friendly high speed Ethernet switching

blog.benjojo.co.uk - Comments

Apr 18 2024

A skrillex style logo, but it says mellonx instead

I’ve been on the lookout for an ethernet switch that I don’t hate. The problem with a lot of higher speed (10G and above) ethernet switches is that they are quite expensive new, and if you buy them used they rarely have many years of software patches left (if any at all).

A lot of the low end market for ethernet switches also has infamously bad software, and one of the things that annoys me the most about the networking industry as a whole is that a lot of the cheap equipment has no real way of letting you do software support yourself.

So, I was very happy to learn that a friend had a Mellanox SN2010 that they were not using and were willing to sell to me. The SN2010 (or the HPE branded SKU that I picked up, the SN2010M) is an 18xSFP28 (25Gbit) and 4xQSFP28 (100Gbit) switch that, instead of using a Broadcom chipset for the data plane (the bit that actually switches the packets) like your typical switch, uses Mellanox’s (now NVIDIA) own silicon. The massive benefit of this is that the drivers (mlxsw) for the Mellanox chip are open to people who don’t want to pay large sums of money for an SDK, unlike the Broadcom counterparts.

So I took a punt, and bought it.

A half width 1U height switch, poking out of the top of a bag

The goal that I now have is to run this relatively cheap (and power efficient at 60W) switch with as close to stock Debian as possible. That way I do not have to lean on any supplier for software updates, and I can upgrade software on the switch for as long as I need it. This does mean however that the bugs will be my responsibility (since there is no TAC to fall back on).

A lot of the stuff I am going to present will be similar to Pim’s research on the similar SN2700, but I will focus on how I’ve deployed my setup, now that I’ve been running it in production for some time without any hitches. The SN2700 is a 32x100G port device; newer, larger, and faster versions of these switches are also available.

First though, let’s take a peek…

The insides of a switch, there are 4 boards, 2 back PSUs, 1 main switch board, and a riser board containing the control plane

Here we see two reasonably integrated 12V DC power supplies. They are not hot swappable; however, in my production experience so far I have not encountered a PSU failure where a hot swap unit would have been useful, so that is not a production concern to me.

In addition, the fan tray can easily be swapped around, meaning you can transition this from being a Ports-to-Power airflow to Power-to-Ports airflow. I use the switch in Power-to-Ports as it’s easier to access the optical ports from the back of the rack than the front for my use case.

One thing of note while we have the lid open: inside the switch is a mystery QSFP connector with no cage. I’m unsure what it is used for, but I’m not willing to risk sticking an optic in it to find out!

A picture of a QSFP electrical connector without a cage on the side of a switch PCB

Because the switch is sold under “open networking” it comes with the ONIE installation system.

My previous experience with ONIE suggests that it is not going to be incredibly useful for our use case, so we will not be using it, instead opting for a more SN2010 specific method. If you squint, this switch basically looks like a laptop with an absolutely massive network card installed into it:

A block diagram of the SN2010

So we can just install Debian on it as if it was a laptop. So first, we need to get into the firmware/BIOS of the switch.

First, you will (really) need a USB keyboard to get into the BIOS. When the switch boots up, mash F7; if you are lucky Ctrl+B will work over the serial console, but do not count on this working.

If you have done this correctly, a BIOS password prompt will show up; the BIOS password is typically “admin”.

Once you are into the BIOS you can remove the USB keyboard and replace it with a USB drive containing a debian netinstall image, then use the serial console to navigate to the EFI Shell and boot grub from the USB drive.


When you get to grub you need to teach the installer about the serial console, since that is what you will be installing over. You can do this by adding “console=tty0 console=ttyS0,115200” to the end of the boot options.

Once you have done that, you can proceed with a normal debian install. You will only see one NIC for now, since that is the built-in NIC from the Intel Atom SoC, which is the “management” ethernet port next to the serial console.

Once you have finished installing, you will need to apply the same “console=tty0 console=ttyS0,115200” boot options on first boot, and then make that a permanent grub configuration (for example, by adding it to GRUB_CMDLINE_LINUX in /etc/default/grub and running update-grub).

Now that we have a working debian system, we can observe that we only currently have a single NIC, but we do have a mystery PCI device.

# lspci
00:00.0 Host bridge: Intel Corporation Atom processor C2000 SoC Transaction Router (rev 03)
00:01.0 PCI bridge: Intel Corporation Atom processor C2000 PCIe Root Port 1 (rev 03)
00:02.0 PCI bridge: Intel Corporation Atom processor C2000 PCIe Root Port 2 (rev 03)
00:03.0 PCI bridge: Intel Corporation Atom processor C2000 PCIe Root Port 3 (rev 03)
00:0b.0 Co-processor: Intel Corporation Atom processor C2000 QAT (rev 03)
00:0e.0 Host bridge: Intel Corporation Atom processor C2000 RAS (rev 03)
00:0f.0 IOMMU: Intel Corporation Atom processor C2000 RCEC (rev 03)
00:13.0 System peripheral: Intel Corporation Atom processor C2000 SMBus 2.0 (rev 03)
00:14.0 Ethernet controller: Intel Corporation Ethernet Connection I354 (rev 03)
00:16.0 USB controller: Intel Corporation Atom processor C2000 USB Enhanced Host Controller (rev 03)
00:17.0 SATA controller: Intel Corporation Atom processor C2000 AHCI SATA2 Controller (rev 03)
00:18.0 SATA controller: Intel Corporation Atom processor C2000 AHCI SATA3 Controller (rev 03)
00:1f.0 ISA bridge: Intel Corporation Atom processor C2000 PCU (rev 03)
00:1f.3 SMBus: Intel Corporation Atom processor C2000 PCU SMBus (rev 03)
01:00.0 Ethernet controller: Mellanox Technologies MT52100

To allow the switch's OS to command the switch chip and see the other front panel ports, we will need to use a kernel that has the mlxsw_core module; this module is not compiled with the “stock” debian kernels.

This is a case of ensuring the following options are set in the kernel build config:


CONFIG_NET_IPIP=m
CONFIG_NET_IPGRE_DEMUX=m
CONFIG_NET_IPGRE=m
CONFIG_IPV6_GRE=m
CONFIG_IP_MROUTE_MULTIPLE_TABLES=y
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IPV6_MULTIPLE_TABLES=y
CONFIG_BRIDGE=m
CONFIG_VLAN_8021Q=m
CONFIG_BRIDGE_VLAN_FILTERING=y
CONFIG_BRIDGE_IGMP_SNOOPING=y
CONFIG_NET_SWITCHDEV=y
CONFIG_NET_DEVLINK=y
CONFIG_MLXFW=m
CONFIG_MLXSW_CORE=m
CONFIG_MLXSW_CORE_HWMON=y
CONFIG_MLXSW_CORE_THERMAL=y
CONFIG_MLXSW_PCI=m
CONFIG_MLXSW_I2C=m
CONFIG_MLXSW_MINIMAL=y
CONFIG_MLXSW_SWITCHX2=m
CONFIG_MLXSW_SPECTRUM=m
CONFIG_MLXSW_SPECTRUM_DCB=y
CONFIG_LEDS_MLXCPLD=m
CONFIG_NET_SCH_PRIO=m
CONFIG_NET_SCH_RED=m
CONFIG_NET_SCH_INGRESS=m
CONFIG_NET_CLS=y
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_MIRRED=m
CONFIG_NET_CLS_MATCHALL=m
CONFIG_NET_CLS_FLOWER=m
CONFIG_NET_ACT_GACT=m
CONFIG_NET_ACT_MIRRED=m
CONFIG_NET_ACT_SAMPLE=m
CONFIG_NET_ACT_VLAN=m
CONFIG_NET_L3_MASTER_DEV=y
CONFIG_NET_VRF=m

If you are not able to compile a kernel yourself, you can try my pre-compiled kernels (that come with zero support/security updates/guarantee) here: https://benjojo.co.uk/fp/mlx-sw-kernel-debs.tar

The Linux kernel driver is expecting a specific version of firmware to be running on the switch chip, so after you reboot with the new kernel you might still not have all of the interfaces. You can look in dmesg for something like:

[ 7.168728] mlxsw_spectrum 0000:01:00.0: The firmware version 13.1910.622 is incompatible with the driver (required >= 13.2010.1006)

We can get these firmware blobs from https://switchdev.mellanox.com/firmware/ and extract them to /usr/lib/firmware/mellanox. For example, the file path for the above dmesg line should be /usr/lib/firmware/mellanox/mlxsw_spectrum-13.2010.1006.mfa2. Once you have put it there, you may also want to run update-initramfs -u -k all, then reboot and wait (for at least 10 mins) for the driver to automatically upgrade the chip firmware.

If you are running HPE or Catchpoint SKUs of this switch, the kernel driver may fail to upgrade the firmware with something like:

mlxfw: Firmware flash failed: Could not lock the firmware FSM, err (-5)

If you encounter this, try compiling the user space tool and running the upgrade manually:

$ mstfwmanager -d 01:00.0 -i mlxsw_spectrum-13.2000.2308.mfa -f -u

If successful, the upgrade should look like:

Device #1:
----------

  Device Type:      Spectrum
  Part Number:      Q9E63-63001_Ax
  Description:      HPE StoreFabric SN2010M 25GbE 18SFP28 4QSFP28 Half Width Switch
  PSID:             HPE0000000025
  PCI Device Name:  01:00.0
  Base MAC:         1c34daaaaa00
  Versions:         Current        Available   

     FW             13.1910.0622   13.2010.1006  
  Status:           Update required

---------
Found 1 device(s) requiring firmware update...
Device #1: Updating FW ...    
[4 mins delay]
Done

Restart needed for updates to take effect.

Assuming the upgrade succeeds, reboot the switch and you should see an extra 20 network interfaces appear in ip link.

You can double check your chip versions by running:

# devlink dev info
pci/0000:01:00.0:
  driver mlxsw_spectrum
  versions:
      fixed:
        hw.revision A1
        fw.psid HPE0000000025
      running:
        fw.version 13.2010.1006
        fw 13.2010.1006

You will likely want to apply udev rules to ensure these interfaces are named in a way that makes a bit more sense; otherwise, you can physically locate each port by blinking its port LED with ethtool -p swp1.

I use the udev rules from Pim’s guide on the SN2700:

# cat << EOF > /etc/udev/rules.d/10-local.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="mlxsw_spectrum*", \
    NAME="sw$attr{phys_port_name}"
EOF

Once you reboot, your front panel interface names should now be swp* interfaces that roughly match the numbers on the front.

If you are ever unsure which port you are looking at on the CLI, you can “eyeball” what port is what by using the port speed indicators from ethtool; for example, a 100G QSFP28 port looks like:

root@bgptools-switch:~# ethtool swp20
Settings for swp20:
    Supported ports: [ FIBRE ]
    Supported link modes:   1000baseKX/Full
                            10000baseKR/Full
                            40000baseCR4/Full
                            40000baseSR4/Full
                            40000baseLR4/Full
                            25000baseCR/Full
                            25000baseSR/Full
                            50000baseCR2/Full
                            100000baseSR4/Full
                            100000baseCR4/Full
                            100000baseLR4_ER4/Full

These ports can be configured as you would a normal “software” linux router interface, complete with the routing table as well, except that most configuration you provide to linux is automatically replicated to the ASIC for you. In my case I will use ifupdown to manage my interface configuration, as it is the easiest for me to debug if it ever goes wrong.

This allows you to have 800Gbits+ of capacity managed by a 4 Core Intel Atom CPU!

Now that we have a working router, we can just set things up like we would a normal Linux “soft” router, except this swooter (a name I use for switches that function as IP routers as well) can copy the setup we build inside of Linux and put it into a data plane capable of multiples of 100 Gbit/s.

For the sake of this post, I will go over the setup I’ve been running in production to show you what this switch has to offer:

A diagram of how I have set up my switch

I have two uplink ports from my provider. They are VLAN tagged, carrying a DIA/Internet service with point-to-point addresses (/31 for IPv4, /127 for IPv6) that have BGP on them. There are a number of other IP services that are delivered using different VLANs on one of the provider ports.

In my previous setup I would be doing BGP on one of the servers, but the switch can handle both of these BGP sessions itself. It is worth knowing that the switch cannot hold “full” internet BGP tables, so I requested that my provider send IPv4 and IPv6 default routes on both BGP sessions to solve that problem.
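As a rough sketch of what the uplink side looks like in ifupdown (the VLAN IDs match the swp1.600/swp2.601 interfaces that appear in the route output further down, but the addresses here are placeholders rather than my real ones):

auto swp1
iface swp1 inet manual

# DIA/Internet VLAN towards the provider, point-to-point addressing; bird speaks BGP over these
auto swp1.600
iface swp1.600 inet static
        address 192.0.2.0/31

iface swp1.600 inet6 static
        address 2001:db8:600::1/127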

In my previous setup, all of my servers sat in a private production VLAN with OSPF coordinating the IP addressing between them. Since I don’t want to change too many things at once, I’ve replicated the same thing by putting most servers’ ethernet ports into a newly made “br-rack” Linux bridge.

There is however one server that needs this bridge delivered inside a VLAN, but that is fine: you can just make a Linux VLAN interface and add that to the bridge, and the driver will automatically figure this out.

Using ifupdown language, this is what the port configuration looks like in /etc/network/interfaces:

auto swp4
iface swp4 inet manual
iface swp4 inet6 manual

# swp4.400 is the "Rack LAN"; swp4.700 and swp4.701 carry Service1 and Service2
auto swp4.400
auto swp4.700
auto swp4.701

auto br-rack0

iface br-rack0 inet static
        bridge_hw swp5
        bridge-ports swp4.400 swp5 swp6 swp7 swp8 swp9 swp10
        bridge_stp off
        bridge_waitport 0
        bridge_fd 0
        address 185.230.223.xxx/28
        post-up ip l set dev br-rack0 type bridge mcast_snooping 0


iface br-rack0 inet6 static
        dad-attempts 0
        address 2a0c:2f07:4896:xxx/120

You will want to ensure you have set mcast_snooping 0 on the bridge if you plan on using OSPF, as having snooping enabled without extra services running on the switch can disrupt multicast traffic (including OSPF).

You will also want to set bridge_hw to a switch port of your choice. Due to hardware limitations, the switch chip has to use a single range of MAC addresses for things that relate/route to it, so the bridge_hw option just “steals” the MAC address of one of the ports and uses that for the bridge.
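If you want to confirm this worked, a quick check (using the port and bridge names from the config above) is to compare the MAC addresses:

# the bridge should have taken swp5's MAC address
ip -br link show dev swp5
ip -br link show dev br-rack0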

At this point you can just configure BGP and OSPF as you normally would, and install/export the routes into the kernel. However, since the hardware can only hold around 80,000 routes, some care needs to be taken to ensure that you only “install” your own internal/OSPF routes and your provider’s BGP default routes.

For example, my own bird config looks like:

protocol kernel {
        merge paths on;
        ipv4 {                        
                export filter {
                        if net ~ [0.0.0.0/0{0,0},185.230.223.0/24{24,32}] || source = RTS_OSPF || source = RTS_OSPF_EXT2 || source = RTS_OSPF_EXT1 then {
                                accept;
                        }
                        reject;
                };
        };
}

The merge paths on option allows the exported kernel routes to carry multiple nexthops so the switch can ECMP across them, which is useful for your default routes:

root@bgptools-switch:~# ip route
default proto bird metric 32 rt_offload
        nexthop via 192.0.2.1 dev swp1.600 weight 1 offload
        nexthop via 192.0.2.2 dev swp2.601 weight 1 offload
198.51.100.0/28 via 185.230.223.xxx dev br-rack0 proto bird metric 32 offload rt_offload
203.0.113.0/24 via 185.230.223.xxx dev br-rack0 proto bird metric 32 offload rt_offload

It is worth pointing out that you should also set up a sane and sensible SSH policy and firewalling. You could easily just apply the same solution that you use for your servers (Salt/Chef/Puppet/Ansible); after all, this is just like a server with a magic NIC in it!
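As a minimal sketch of the idea (not my actual policy; the trusted management subnet is a placeholder), a control plane firewall in nftables could look something like this. Remember that only traffic punted to the CPU ever hits these chains; hardware-forwarded traffic does not:

# /etc/nftables.conf (illustrative)
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;

        ct state established,related accept
        iif "lo" accept

        # ICMP / ICMPv6, including neighbour discovery
        meta l4proto { icmp, ipv6-icmp } accept

        # OSPF (IP protocol 89) and BGP towards the switch itself
        meta l4proto 89 accept
        tcp dport 179 accept

        # management SSH from a trusted subnet only
        ip saddr 198.51.100.0/24 tcp dport 22 accept
    }
}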

There are also some good Linux sysctl options you should set to make your swooter act more like a hardware router is expected to, as the mlxsw wiki recommends:


# Enable IPv4 and IPv6 forwarding.
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1
net.ipv6.conf.default.forwarding=1

# Keep IPv6 addresses on an interface when it goes down. This is
# consistent with IPv4.
net.ipv6.conf.all.keep_addr_on_down=1
net.ipv6.conf.default.keep_addr_on_down=1
# Prevent the kernel from routing packets via an interface whose link is
# down. This is not strictly necessary when a routing daemon is used as
# it will most likely evict such routes. In addition, when offloaded,
# such routes will not be considered anyway since the associated neighbour
# entries will be flushed upon the carrier going down, preventing the
# device from determining the destination MAC it should use.
net.ipv4.conf.all.ignore_routes_with_linkdown=1
net.ipv6.conf.all.ignore_routes_with_linkdown=1
net.ipv4.conf.default.ignore_routes_with_linkdown=1
net.ipv6.conf.default.ignore_routes_with_linkdown=1

# Use a standard 5-tuple to compute the multi-path hash.
net.ipv4.fib_multipath_hash_policy=1
net.ipv6.fib_multipath_hash_policy=1
# Generate an unsolicited neighbour advertisement when an interface goes
# down or its hardware address changes.
net.ipv6.conf.all.ndisc_notify=1
net.ipv6.conf.default.ndisc_notify=1

# Do not perform source validation when routing IPv4 packets. This is
# consistent with the hardware data path behavior. No configuration
# is necessary for IPv6.
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
# Do not update the SKB priority from "TOS" field in IP header after
# the packet is forwarded. This applies to both IPv4 and IPv6 packets
# which are forwarded by the device.
net.ipv4.ip_forward_update_priority=0

# Prevent the kernel from generating a netlink event for each deleted
# IPv6 route when an interface goes down. This is consistent with IPv4.
net.ipv6.route.skip_notify_on_dev_down=1
# Use neighbour information when choosing a nexthop in a multi-path
# route. Will prevent the kernel from routing the packets via a
# failed nexthop. This is consistent with the hardware behavior.
net.ipv4.fib_multipath_use_neigh=1

# Increase the maximum number of cached IPv6 routes. No configuration is
# necessary for IPv4.
net.ipv6.route.max_size=16384
# In case the number of non-permanent neighbours in the system exceeds
# this value for over 5 seconds, the garbage collector will kick in.
# Default is 512, but if the system has a larger number of interfaces or
# expected to communicate with a larger number of directly-connected
# neighbours, then it is recommended to increase this value.
net.ipv4.neigh.default.gc_thresh2=8192
net.ipv6.neigh.default.gc_thresh2=8192

# In case the number of non-permanent neighbours in the system exceeds
# this value, the garbage collector will kick in. Default is 1024, but
# if the system has a larger number of interfaces or expected to
# communicate with a larger number of directly-connected neighbours,
# then it is recommended to increase this value.
net.ipv4.neigh.default.gc_thresh3=16384
net.ipv6.neigh.default.gc_thresh3=16384
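To make these persistent across reboots, you can drop them into a sysctl.d snippet (the filename below is arbitrary) and reapply:

# save the settings above as e.g. /etc/sysctl.d/90-mlxsw.conf, then reload everything
sysctl --system

# spot check one of them
sysctl net.ipv4.fib_multipath_hash_policy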

Thanks to the switch chip, almost all traffic going through the switch will not be visible to the Debian side of the system. This does mean that you will not be able to use nftables/iptables on forwarded traffic; however, the switch driver does allow some Linux Traffic Control (tc) rules that use the “flower” classifier to be inserted into hardware. For example:

tc qdisc add dev swp1 clsact

# Rate limit UDP arriving on swp1 towards an IP address to 10mbit/s
tc filter add dev swp1 ingress protocol ip pref 10 \
        flower skip_sw dst_ip 192.0.2.1 ip_proto udp \
        action police rate 10mbit burst 16k conform-exceed drop/ok

# Drop TCP SYN packets from swp1 going to 192.0.2.2
tc filter add dev swp1 ingress protocol ip pref 20 \
        flower dst_ip 192.0.2.2 ip_proto tcp tcp_flags 0x17/0x02 \
        action drop

You can monitor the results of these rules using tc -s filter show dev <port> ingress, for example:

# tc -s filter show dev swp2 ingress
filter protocol ip pref 10 flower chain 0
filter protocol ip pref 10 flower chain 0 handle 0x1
  eth_type ipv4
  ip_proto udp
  dst_ip 192.0.2.1
  skip_sw
  in_hw in_hw_count 1
        action order 1:  police 0x7 rate 10Mbit burst 16Kb mtu 2Kb action drop overhead 0b
        ref 1 bind 1  installed 3615822 sec used 1 sec
        Action statistics:
        Sent 3447123283 bytes 4481404 pkt (dropped 1920284, overlimits 0 requeues 0)
        Sent software 0 bytes 0 pkt
        Sent hardware 3447123283 bytes 4481404 pkt
        backlog 0b 0p requeues 0
        used_hw_stats immediate
...

Useful examples of flower rules include:

# Target UDP to an IP range
flower skip_sw dst_ip 192.0.2.0/24 ip_proto udp

# Target TCP traffic with source port 80, to any IP
flower skip_sw ip_proto tcp src_port 80
 
# Target all GRE packets
flower skip_sw ip_proto 47

You must ensure that you do not put skip_sw in a rule that is meant to drop packets, otherwise your ACL could be bypassed by a packet engineered to trigger a control plane punt, since the software path would never see the rule.

I do not know of any good utility to manage these rules for you; instead I have a shell script that applies them on boot using a systemd service.
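Something along these lines works (a rough sketch rather than my actual script; the paths and the example rule are illustrative):

#!/bin/sh
# /usr/local/sbin/apply-tc-rules.sh - reapply the hardware ACLs at boot
set -e
# attach the classifier hook (ignore the error if it is already present)
tc qdisc add dev swp1 clsact 2>/dev/null || true
tc filter add dev swp1 ingress protocol ip pref 20 \
        flower dst_ip 192.0.2.2 ip_proto tcp tcp_flags 0x17/0x02 \
        action drop

# /etc/systemd/system/tc-rules.service
[Unit]
Description=Apply tc flower ACLs to switch ports
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/sbin/apply-tc-rules.sh

[Install]
WantedBy=multi-user.target

Enable it once with systemctl enable --now tc-rules.service and the rules will be reinstalled on every boot.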

Since there are two sides to this switch (the CPU side and the chip side), it is useful to monitor both of them. The driver automatically keeps the regular kernel counters in sync with what the chip is doing for you:

# ip -s -h l show dev swp1
24: swp1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 1c:34:da:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    RX:  bytes packets errors dropped  missed   mcast          
         2.01T   4.74G      0       0       0    214M
    TX:  bytes packets errors dropped carrier collsns          
         1.10T   5.78G      0       0       0       0
    altname enp1s0np1

If you just want to know how much of an interface’s traffic has been sent to the Atom CPU, you can run this:

# ip -h stats show dev swp1 group offload subgroup cpu_hit
24: swp1: group offload subgroup cpu_hit
    RX:  bytes packets errors dropped  missed   mcast          
         25.3G    264M      0       0       0       0
    TX:  bytes packets errors dropped carrier collsns          
          442M   5.14M      0       0       0       0

The 25.3 GB here is, roughly speaking, the BGP traffic that the switch has exchanged with the service provider since boot.

If you want a breakdown of why packets were sent to the CPU, you can look at the devlink trap counters:

# devlink -s trap | grep -v pci/0000:01:00.0 | paste - - - - | grep -v "bytes 0"
  name ttl_value_is_too_small type exception generic true action trap group l3_exceptions            stats:                rx:                  bytes 20084104 packets 227812
...
  name ipv6_ospf type control generic true action trap group ospf            stats:                rx:                  bytes 71177822 packets 591399
  name ipv4_bgp type control generic true action trap group bgp            stats:                rx:                  bytes 1626384210 packets 3005173
  name ipv6_bgp type control generic true action trap group bgp            stats:                rx:                  bytes 2128110217 packets 3538356
...

Because the network interface counters are automatically synchronised, you can use the same monitoring tools on this switch that you use on your servers. My setup is a blend of collectd and Prometheus node_exporter, and both of these tools work fine:

A screenshot of an RRD graph showing bandwidth on a switch port

Normal packet sampling methods do not work on this switch because, as mentioned above, the CPU side of the switch is almost totally oblivious to the traffic passing through it. This becomes a problem when you wish to do packet sampling for traffic statistics, or to drive something like FastNetMon for DDoS detection.

However, not all is lost: hsflowd does support the driver’s “psample” system for gathering sample data.

My hsflowd config is as follows:

sflow {
  sampling.10G=10000
  collector { ip=192.0.2.1 UDPPort=6666 }
  psample { group=1 egress=on }
  dent { sw=on switchport=swp.* }
}

Since hsflowd has a software licence that is incompatible with most distros, you will have to build it yourself. However, I find that once compiled, hsflowd automatically manages the tc rules required for packet sampling.
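Roughly, building it looks like this (a sketch only; check the host-sflow README for the build options and feature flags your platform needs, and note that the service installation step can vary):

git clone https://github.com/sflow/host-sflow.git
cd host-sflow
make
sudo make install

# then write /etc/hsflowd.conf (see above) and start the daemon
sudo systemctl enable --now hsflowd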

I think this is incredibly nice hardware, with even more incredible open source drivers. However, I do worry that Nvidia could go down a similar path to the late Nortel, given its meteoric rise in an industry that could easily be a bubble. For that reason it is worth calling out that they are not the only vendor with this kind of open source driver functionality.

Arista has a closed source equivalent where you can supplement parts of their “EOS” with your own components, though it is nowhere near as complete as this. It does, however, allow you to run bird (or other routing software) on their products if you wish to retain control of the code powering your routing protocols.

Marvell also apparently has drivers similar to mlxsw. I have yet to personally use such hardware, but Mikrotik is known to use Marvell chips, and right now there is no official (or known) way to “jailbreak” that hardware to run your own software stack.

I hope this changes in the future. Mikrotik’s hardware price point is very competitive; it’s just the software reliability that always turns me off their products, so having the option to keep their very competitive hardware while not using RouterOS would be a huge deal.

I agree with Pim’s conclusion: this switch and its ecosystem are incredible. Good, acquirable hardware combined with software that you have the power to fix yourself is currently unheard of in the industry, and Mellanox delivered it!

This setup has been running without a hitch for bgp.tools for some time now, and I hope to keep it running until I outscale it one day, since I would be surprised if I need to remove it for any other reason.

I’d like to thank Pim van Pelt for their earlier post on these devices, and Basil Filian for helping me figure out a number of their quirks!

If you want to stay up to date with the blog you can use the RSS feed or you can follow me on Fediverse @[email protected]!

Until next time!

Piet: Programming language in which programs look like abstract paintings (2002)

www.dangermouse.net - Comments


Composition with Red, Yellow and Blue, 1921, Piet Mondrian.

Introduction

Piet is a programming language in which programs look like abstract paintings. The language is named after Piet Mondrian, who pioneered the field of geometric abstract art. I would have liked to call the language Mondrian, but someone beat me to it with a rather mundane-looking scripting language. Oh well, we can't all be esoteric language writers I suppose.

Notes:

  • I wrote the Piet specification a long time ago, and the language has taken on a bit of a life of its own, with a small community of coders writing Piet programs, interpreters, IDEs, and even compilers. I have not written any "authoritative" interpreter, and the different ones available sometimes interpret the specification slightly differently.
  • Over the years I have tended to field questions about the spec with "whatever you think makes the most sense", rather than any definitive clarification - thus the slightly different versions out there. I have now added some clarifications to this specification to address some of the questions I have been asked over the years. Hopefully they are sensible and most implementations will already be compliant, but it's possible some do not comply. Caveat emptor.
  • Some people like to use Piet to set puzzles in various competitions. This web page and the linked resources can help you solve those puzzles, if you have a reasonable grasp of computer coding. If you do not, or it looks too difficult, I suggest asking some of your friends who may be computer programmers to help you. Please do not email to ask me for help. Although I wish you the best in solving your puzzle, I do not have time to help everyone in this situation.

Design Principles

  • Program code will be in the form of abstract art.

Language Concepts

Colours

light red #FFC0C0    light yellow #FFFFC0    light green #C0FFC0    light cyan #C0FFFF    light blue #C0C0FF    light magenta #FFC0FF
red #FF0000          yellow #FFFF00          green #00FF00          cyan #00FFFF          blue #0000FF          magenta #FF00FF
dark red #C00000     dark yellow #C0C000     dark green #00C000     dark cyan #00C0C0     dark blue #0000C0     dark magenta #C000C0
white #FFFFFF        black #000000

Piet uses 20 distinct colours, as shown in the table above. The 18 colours in the first 3 rows of the table are related cyclically in the following two ways:

  • Hue Cycle: red -> yellow -> green -> cyan -> blue -> magenta -> red
  • Lightness Cycle: light -> normal -> dark -> light

Note that "light" is considered to be one step "darker" than "dark", and vice versa. White and black do not fall into either cycle.

Additional colours (such as orange, brown) may be used, though their effect is implementation-dependent. In the simplest case, non-standard colours are treated by the language interpreter as the same as white, so may be used freely wherever white is used. (Another possibility is that they are treated the same as black.)

Codels

Piet code takes the form of graphics made up of the recognised colours. Individual pixels of colour are significant in the language, so it is common for programs to be enlarged for viewing so that the details are easily visible. In such enlarged programs, the term "codel" is used to mean a block of colour equivalent to a single pixel of code, to avoid confusion with the actual pixels of the enlarged graphic, of which many may make up one codel.

Colour Blocks

The basic unit of Piet code is the colour block. A colour block is a contiguous block of any number of codels of one colour, bounded by blocks of other colours or by the edge of the program graphic. Blocks of colour adjacent only diagonally are not considered contiguous. A colour block may be any shape and may have "holes" of other colours inside it, which are not considered part of the block.

Stack

Piet uses a stack for storage of all data values. Data values exist only as integers, though they may be read in or printed as Unicode character values with appropriate commands.

The stack is notionally infinitely deep, but implementations may elect to provide a finite maximum stack size. If a finite stack overflows, it should be treated as a runtime error, and handling this will be implementation dependent.

Program Execution

DP       CC       Codel chosen
right    left     uppermost
right    right    lowermost
down     left     rightmost
down     right    leftmost
left     left     lowermost
left     right    uppermost
up       left     leftmost
up       right    rightmost

The Piet language interpreter begins executing a program in the colour block which includes the upper left codel of the program. The interpreter maintains a Direction Pointer (DP), initially pointing to the right. The DP may point either right, left, down or up. The interpreter also maintains a Codel Chooser (CC), initially pointing left. The CC may point either left or right. The directions of the DP and CC will often change during program execution.

As it executes the program, the interpreter traverses the colour blocks of the program under the following rules:

  1. The interpreter finds the edge of the current colour block which is furthest in the direction of the DP. (This edge may be disjoint if the block is of a complex shape.)
  2. The interpreter finds the codel of the current colour block on that edge which is furthest to the CC's direction of the DP's direction of travel. (Visualise this as standing on the program and walking in the direction of the DP; see the table above.)
  3. The interpreter travels from that codel into the colour block containing the codel immediately in the direction of the DP.

The interpreter continues doing this until the program terminates.

Syntax Elements

Numbers

Each non-black, non-white colour block in a Piet program represents an integer equal to the number of codels in that block. Note that non-positive integers cannot be represented, although they can be constructed with operators. When the interpreter encounters a number, it does not necessarily do anything with it. In particular, it is not automatically pushed on to the stack - there is an explicit command for that (see below).

The maximum size of integers is notionally infinite, though implementations may implement a finite maximum integer size. An integer overflow is a runtime error, and handling this will be implementation dependent.

Black Blocks and Edges

Black colour blocks and the edges of the program restrict program flow. If the Piet interpreter attempts to move into a black block or off an edge, it is stopped and the CC is toggled. The interpreter then attempts to move from its current block again. If it fails a second time, the DP is moved clockwise one step. These attempts are repeated, with the CC and DP being changed between alternate attempts. If after eight attempts the interpreter cannot leave its current colour block, there is no way out and the program terminates.

White Blocks

White colour blocks are "free" zones through which the interpreter passes unhindered. If it moves from a colour block into a white area, the interpreter "slides" through the white codels in the direction of the DP until it reaches a non-white colour block. If the interpreter slides into a black block or an edge, it is considered restricted (see above), otherwise it moves into the colour block so encountered. Sliding across white blocks into a new colour does not cause a command to be executed (see below). In this way, white blocks can be used to change the current colour without executing a command, which is very useful for coding loops.

Sliding across white blocks takes the interpreter in a straight line until it hits a coloured pixel or edge. It does not use the procedure described above for determining where the interpreter emerges from non-white coloured blocks.

Precisely what happens when the interpreter slides across a white block and hits a black block or an edge was not clear in the original specification. My interpretation follows from a literal reading of the above text:

  • The interpreter "slides" across the white block in a straight line.
  • If it hits a restriction, the CC is toggled. Since this results in no difference in where the interpreter is trying to go, the DP is immediately stepped clockwise.
  • The interpreter now begins sliding from its current white codel, in the new direction of the DP, until it either enters a coloured block or encounters another restriction.
  • Each time the interpreter hits a restriction while within the white block, it toggles the CC and steps the DP clockwise, then tries to slide again. This process repeats until the interpreter either enters a coloured block (where execution then continues); or until the interpreter begins retracing its route. If it retraces its route entirely within a white block, there is no way out of the white block and execution should terminate.

Commands

                  Lightness change:
Hue change        None            1 Darker        2 Darker
None                              push            pop
1 Step            add             subtract        multiply
2 Steps           divide          mod             not
3 Steps           greater         pointer         switch
4 Steps           duplicate       roll            in(number)
5 Steps           in(char)        out(number)     out(char)

Commands are defined by the transition of colour from one colour block to the next as the interpreter travels through the program. The number of steps along the Hue Cycle and Lightness Cycle in each transition determine the command executed, as shown in the table above. If the transition between colour blocks occurs via a slide across a white block, no command is executed. The individual commands are explained below.

  • push: Pushes the value of the colour block just exited on to the stack. Note that values of colour blocks are not automatically pushed on to the stack - this push operation must be explicitly carried out.
  • pop: Pops the top value off the stack and discards it.
  • add: Pops the top two values off the stack, adds them, and pushes the result back on the stack.
  • subtract: Pops the top two values off the stack, calculates the second top value minus the top value, and pushes the result back on the stack.
  • multiply: Pops the top two values off the stack, multiplies them, and pushes the result back on the stack.
  • divide: Pops the top two values off the stack, calculates the integer division of the second top value by the top value, and pushes the result back on the stack. If a divide by zero occurs, it is handled as an implementation-dependent error, though simply ignoring the command is recommended.
  • mod: Pops the top two values off the stack, calculates the second top value modulo the top value, and pushes the result back on the stack. The result has the same sign as the divisor (the top value). If the top value is zero, this is a divide by zero error, which is handled as an implementation-dependent error, though simply ignoring the command is recommended. (See note below.)
  • not: Replaces the top value of the stack with 0 if it is non-zero, and 1 if it is zero.
  • greater: Pops the top two values off the stack, and pushes 1 on to the stack if the second top value is greater than the top value, and pushes 0 if it is not greater.
  • pointer: Pops the top value off the stack and rotates the DP clockwise that many steps (anticlockwise if negative).
  • switch: Pops the top value off the stack and toggles the CC that many times (the absolute value of that many times if negative).
  • duplicate: Pushes a copy of the top value on the stack on to the stack.
  • roll: Pops the top two values off the stack and "rolls" the remaining stack entries to a depth equal to the second value popped, by a number of rolls equal to the first value popped. A single roll to depth n is defined as burying the top value on the stack n deep and bringing all values above it up by 1 place. A negative number of rolls rolls in the opposite direction. A negative depth is an error and the command is ignored. If a roll is greater than an implementation-dependent maximum stack depth, it is handled as an implementation-dependent error, though simply ignoring the command is recommended.
  • in: Reads a value from STDIN as either a number or character, depending on the particular incarnation of this command and pushes it on to the stack. If no input is waiting on STDIN, this is an error and the command is ignored. If an integer read does not receive an integer value, this is an error and the command is ignored.
  • out: Pops the top value off the stack and prints it to STDOUT as either a number or character, depending on the particular incarnation of this command.

Any operations which cannot be performed (such as popping values when not enough are on the stack) are simply ignored, and processing continues with the next command.

Note on the mod command: In the original specification of Piet the result of a modulo operation with a negative dividend (the second top value popped off the stack) was not explicitly defined. I assumed that everyone would assume that the result of (p mod q) would always be equal to ((p + Nq) mod q) for any integer N. So:

  • 5 mod 3 = 2
  • 2 mod 3 = 2
  • -1 mod 3 = 2
  • -4 mod 3 = 2

The mod command is thus identical to the floored-division definition described in Wikipedia's page on the modulo operation.


Other People’s Problems

seths.blog - Comments

It’s surprisingly easy to be generous and find solutions to our friends’ problems.

Much easier than it is to do it for ourselves. Why?

There are two useful reasons, I think.

FIRST, because we’re unaware of all the real and imaginary boundaries our friends have set up. If it were easy to solve the problem, they probably would have. But they’re making it hard because they have decided that there are people or systems that aren’t worth challenging. Loosening the constraints always makes a problem easier to solve.

And SECOND, because resistance is real. Solving the problem means moving ahead, confronting new, even scarier problems. It might be easier to simply stay where we are, marinating in our stuck.

When we care enough to solve our own problem, we’ll loosen the unloosenable constraints and embrace the new challenges to come.

CoreNet: A library for training deep neural networks

github.com - Comments


Length and thickness of bamboo internodes: a beautiful curve

www.elegantexperiments.net - Comments

Here is a simple experiment about bamboo anatomy. The measurements were not done very thoroughly, so the results are not yet very robust. However, the first results already show quite an inspiring curve, unveiling a beautiful natural pattern!

This post tries to follow the scientific format of research sharing, developed over the last centuries: the use of a simple structure for quick reading (introduction, method, results, discussion), the supply of all the details for anyone to be able to reproduce the experiment, and the supply of the dataset for anyone to reuse the results.

It is part of my bamboo craft project, where you can find bamboo craft ideas and tutorials.

Introduction: objective

Bamboo fibers are very straight in internodes and entangled in nodes. Pole walls tend to be thicker at the bottom of the pole and thinner at the top. These kinds of variations have a significant impact on how the craftsman can work with bamboo and what he can or cannot do with the different sections.

For instance, if a craftsman wants to cut a straight strip, its length will be limited by the length of the internode. Moreover, the strips might tend to be thick for an internode that is close to the bottom of the pole, and thin for an internode that is close to the top. For example, to build my trivet I used an internode from the base of a bamboo pole, while for my underwear dryer, a bamboo section with slightly thinner walls (located higher up) was sufficient.

Even if many techniques can be used by the craftsman, for instance straightening the strips using heat, closely understanding the structure of bamboo poles can elegantly save time and energy. The craftsman will know better which pole to harvest for a specific use, and fewer steps and tools might then be necessary to reach the final crafted object. Experienced bamboo craftsmen already have this knowledge, but for beginners, trying to understand bamboo by observing it is very useful.

From quick observations, the internodes of a bamboo pole seem shorter at the base and longer at the top, and the bamboo pole wall seems thicker at the bottom and thinner at the top. Do these dimensions really follow patterns all along the pole, or are they actually haphazard? What might be the shapes of these patterns? And do these patterns depend on other variables, such as species?

Well, let’s have a look at how internode length and wall thickness vary along a bamboo pole!

Method: simple measurements

I harvested a full bamboo pole in northern Taiwan, and measured the length of each internode using a tape measure. I also measured the thickness of the wall at certain points using a ruler.

A photo of the harvested bamboo pole

Internode length

I started the measurement of internode length at the base of the pole, above the level of the small roots covering the lower node. The measurement was accurate to the nearest centimeter, due to irregularities in the bamboo pole: the limits corresponding to the nodes were not perpendicular to the axis of the pole, but slightly tilted; and the diameter of the pole tended to be larger at the level of nodes than in the internodes. Measurements are hence not very precise but enough to meet our objectives.

A photo of a bamboo internode

Wall thickness

I measured the wall thickness at the level of cuts that I made close to 4 nodes. The measurement was accurate to the nearest 0.5 millimetres. As the wall thickness is not always even, I averaged 3 measures taken more or less equidistant from one another. Measuring the wall thickness of each internode would be ideal, but it would have required cutting the pole into small sections, preventing me from using it for the other purposes I had planned; it is in particular an internode of this bamboo from which I built the trivet!

A photo of the bamboo wall thickness measurements

Complementary variables

I also recorded other variables to describe the context of the measurements:

  • Date: December 18, 2018
  • Location: northern Taiwan (24°50’43.4”N 121°26’07.8”E)
  • Species: Phyllostachys reticulata (Chinese: 桂竹)
  • Age: unknown, but > 1 year given the lichens developed on the pole
  • Outer diameter at the base of the pole: 7.8 cm (3 in) with an error of about 0.5 cm (0.2 in) due to irregularities
  • Full pole: no; some internodes appeared to be missing at the top, as they may have been eaten by animals or insects at a young age.

Results: a beautiful pattern

The bamboo pole contained 53 internodes and measured 13.62 m in total.

Internode length

The internode length was 10 cm at the base, increased to 38 cm roughly at the middle of the pole, and decreased to 13 cm at the top of the pole. Internodes measured 26 cm on average. The visual representation of the measurements suggests two distinct parts: the length of the internodes follows logarithmic-type dynamics in the lower half, then negative exponential-type dynamics in the upper half! So beautiful. Only bamboo can do this, right?


A plot of internode length along the bamboo pole

Wall thickness

The wall thickness decreased from 12.5 mm on average at the base of the pole to 5 mm at the 14th internode. I took 3 measurements for each of the 4 internodes measured, but some dots are merged on the figure where values were identical. I can infer a rough statistical model from these measurements to predict the thickness of the wall depending on the location along the bamboo pole. No data was collected for the internodes after the 14th, but the model suggests a potential wall thickness of about 2 mm for the last internode.

A plot of wall thickness along the bamboo pole

Discussion: do you want to try?

These measurements already help in understanding the anatomy of a bamboo pole. However, the dataset is quite small and restricted to a single bamboo, which does not make our statistical model robust, despite its apparently high R² of about 0.99.

Improvements could be done thanks to (1) a better measurement technique, as well as (2) additional measurements of bamboo poles in other conditions:

  1. The length of each internode could be measured as the average of the longer and shorter lengths between the tilted limits of the internode. The thickness of the wall could be measured systematically at the middle of each internode, instead of close to the nodes, as the wall appears to be thicker close to the nodes. Moreover, measuring several bamboo poles of a given species in a given location, instead of only one, would also improve the reliability of the results.

  2. It is unlikely that internode length or wall thickness vary depending on the age of the bamboo pole, as bamboo does not have any secondary growth over its lifespan. However, it would be very interesting to see if there might be variations of patterns due to species and location!

Do you also have measurements? Let me know! By combining our work, we could get more interesting results!

There is no need to have a complete data set, nor all the complementary variables; we can perform interesting analyses even with missing data.

Last update: Nov 18, 2020


VideoGigaGAN: Towards detail-rich video super-resolution

videogigagan.github.io - Comments

BibTeX

@article{xu2024videogigagan,
  title={VideoGigaGAN: Towards Detail-rich Video Super-Resolution},
  author={Yiran Xu and Taesung Park and Richard Zhang and Yang Zhou and Eli Shechtman and Feng Liu and Jia-Bin Huang and Difan Liu},
  year={2024},
  eprint={2404.12388},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

Descartes's Stove

bloomsburyliterarystudiesblog.com - Comments