Technology News
This elephant figured out how to use a hose to shower
An Asian elephant named Mary living at the Berlin Zoo surprised researchers by figuring out how to use a hose to take her morning showers, according to a new paper published in the journal Current Biology. “Elephants are amazing with hoses,” said co-author Michael Brecht of the Humboldt University of Berlin. “As it is often the case with elephants, hose tool use behaviors come out very differently from animal to animal; elephant Mary is the queen of showering.”
Tool use was once thought to be one of the defining features of humans, but examples of it were eventually observed in primates and other mammals. Dolphins have been observed using sea sponges to protect their beaks while foraging for food, and sea otters will break open shellfish like abalone with rocks. Several species of fish also use tools to hunt and crack open shellfish, as well as to clear a spot for nesting. And the coconut octopus collects coconut shells, stacking them and transporting them before reassembling them as shelter.
Birds have also been observed using tools in the wild, though for a long time that behavior was thought to be limited to corvids (crows, ravens, and jays); woodpecker finches, for instance, have been known to insert twigs into trees to impale passing larvae for food. Parrots, by contrast, have mostly been noted for their linguistic skills, and there is only limited evidence that they use anything resembling a tool in the wild. Primarily, they seem to use external objects to position nuts while feeding.
New secret math benchmark stumps AI models and PhDs alike
On Friday, research organization Epoch AI released FrontierMath, a new mathematics benchmark that has been turning heads in the AI world because it contains hundreds of expert-level problems that leading AI models solve less than 2 percent of the time, according to Epoch AI. The benchmark tests AI language models (such as GPT-4o, which powers ChatGPT) against original mathematics problems that typically require hours or days for specialist mathematicians to complete.
FrontierMath's performance results, revealed in a preprint research paper, paint a stark picture of current AI model limitations. Even with access to Python environments for testing and verification, top models like Claude 3.5 Sonnet, GPT-4o, o1-preview, and Gemini 1.5 Pro scored extremely poorly. This contrasts with their high performance on simpler math benchmarks—many models now score above 90 percent on tests like GSM8K and MATH.
The design of FrontierMath differs from many existing AI benchmarks because the problem set remains private and unpublished to prevent data contamination. Many existing AI models are trained on other test problem datasets, allowing the AI models to easily solve the problems and appear more generally capable than they actually are. Many experts cite this as evidence that current large language models (LLMs) are poor generalist learners.
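Benchmark results like the sub-2-percent figure above come down to a simple pass-rate calculation over a held-out answer key. As a minimal sketch (the problem IDs, answers, and `solve_rate` helper here are illustrative, not Epoch AI's actual evaluation code), scoring against a private key looks something like this:

```python
# Hypothetical sketch of scoring model outputs against a private answer key.
# Problem IDs and answers are made up for illustration.

def solve_rate(answers: dict[str, str], reference: dict[str, str]) -> float:
    """Fraction of reference problems where the model's final answer matches."""
    solved = sum(1 for pid, ans in answers.items() if reference.get(pid) == ans)
    return solved / len(reference)

reference = {"p1": "42", "p2": "7", "p3": "1729"}   # private, never published
answers = {"p1": "42", "p2": "6", "p3": "0"}        # model's final answers
print(f"{solve_rate(answers, reference):.1%}")       # 33.3%
```

Keeping `reference` unpublished is the point: if the key (or the problems) leaks into training data, a model can match answers it has memorized, inflating the rate without any genuine reasoning.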
For the second time this year, NASA’s JPL center cuts its workforce
Barely nine months after the last cut, NASA's Jet Propulsion Laboratory will again reduce its workforce. On Wednesday, the lab will lay off 325 employees, representing about 5 percent of the workforce at the California-based laboratory that leads the development of robotic space probes for NASA.
"This is a message I had hoped not to have to write," JPL Director Laurie Leshin said in a memo to staff members on Tuesday morning, local time. "Despite this being incredibly difficult for our community, this number is lower than projected a few months ago thanks in part to the hard work of so many people across JPL."
The cuts this week follow a reduction of 530 employees in February of this year due to various factors, including a pause in funding for the Mars Sample Return mission. The NASA laboratory has now cut about one-eighth of its workforce this year.
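The one-eighth figure checks out from the numbers in the article. As a back-of-the-envelope sketch (the implied headcounts are estimates derived from the article's percentages, not official JPL figures):

```python
# Rough check of the "one-eighth" claim using only figures from the article.
feb_cut = 530   # February layoffs
nov_cut = 325   # this week's layoffs

# The November cut is described as about 5 percent of the current workforce.
workforce_before_nov = nov_cut / 0.05                 # ~6,500 (estimate)
workforce_at_start = workforce_before_nov + feb_cut   # ~7,030 (estimate)

total_cut = feb_cut + nov_cut                         # 855
fraction = total_cut / workforce_at_start
print(f"{fraction:.1%}")  # ~12.2%, roughly one-eighth
```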
What if AI doesn’t just keep getting better forever?
For years now, many AI industry watchers have looked at the quickly growing capabilities of new AI models and mused about exponential performance increases continuing well into the future. Recently, though, some of that AI "scaling law" optimism has been replaced by fears that we may already be hitting a plateau in the capabilities of large language models trained with standard methods.
A weekend report from The Information effectively summarized how these fears are manifesting among a number of insiders at OpenAI. Unnamed OpenAI researchers told The Information that Orion, the company's codename for its next full-fledged model release, is showing a smaller performance jump than the one seen between GPT-3 and GPT-4. On certain tasks, in fact, the upcoming model "isn't reliably better than its predecessor," according to unnamed OpenAI researchers cited in the piece.
On Monday, OpenAI co-founder Ilya Sutskever, who left the company earlier this year, added to the concerns that LLMs were hitting a plateau in what can be gained from traditional pre-training. Sutskever told Reuters that "the 2010s were the age of scaling," where throwing additional computing resources and training data at the same basic training methods could lead to impressive improvements in subsequent models.
Record labels unhappy with court win, say ISP should pay more for user piracy
The big three record labels notched another court victory against a broadband provider last month, but the music publishing firms aren't happy that an appeals court only awarded per-album damages instead of damages for each song.
Universal, Warner, and Sony are seeking an en banc rehearing of the copyright infringement case, claiming that Internet service provider Grande Communications should have to pay per-song damages over its failure to terminate the accounts of Internet users accused of piracy. The decision to make Grande pay for each album instead of each song "threatens copyright owners' ability to obtain fair damages," said the record labels' petition filed last week.
The case is in the conservative-leaning US Court of Appeals for the 5th Circuit. A three-judge panel unanimously ruled last month that Grande, a subsidiary of Astound Broadband, violated the law by failing to terminate subscribers accused of being repeat infringers. Subscribers were flagged for infringement based on their IP addresses being connected to torrent downloads monitored by Rightscorp, a copyright-enforcement company used by the music labels.
Bitcoin hits record high as Trump vows to end crypto crackdown
Bitcoin hit a new record high late Monday, its value peaking at $89,623 as investors quickly moved to cash in on expectations that Donald Trump will end a White House crackdown on crypto that intensified last year.
While the trading rally has now paused, analysts predict that bitcoin's value will only continue rising following Trump's win—perhaps even reaching $100,000 by the end of 2024, CNBC reported.
Bitcoin wasn't the only winner emerging from the post-election crypto trading. Crypto exchanges like Coinbase also experienced surges in the market, and one of the biggest winners, CNBC reported, was dogecoin, a cryptocurrency linked to Elon Musk, who campaigned for Trump and may join his administration. Dogecoin's value is up 135 percent since Trump's win.
Spotify’s Car Thing, due for bricking, is getting an open source second life
Spotify has lost all enthusiasm for the little music devices it sold for just half a year. Firmware hackers, as usually happens, have a lot more interest and have stepped in to save, and upgrade, a potentially useful gadget.
Spotify's idea a couple of years ago was a car-focused device for those who lacked Apple CarPlay, Android Auto, or built-in Spotify support in their vehicles, or who just wanted a dedicated Spotify screen. The Car Thing was a $100 doodad with a 4-inch touchscreen and knob that attached to the dashboard (or slid into a CD slot). All it could do was play Spotify, and only if you were a paying subscriber, but that could be an upgrade for owners of older cars, or for people who wanted a little desktop music controller.
But less than half a year after it fully released its first hardware device, Spotify gave up on the Car Thing due to "several factors, including product demand and supply chain issues." A Spotify rep told Ars that the Car Thing was meant "to learn more about how people listen in the car," and now it was "time to say goodbye to the devices entirely." Spotify indicated it would offer refunds, though they weren't guaranteed, and moved forward with plans to brick the device in December 2024.
Review: The fastest of the M4 MacBook Pros might be the least interesting one
In some ways, my review of the new MacBook Pros will be a lot like my review of the new iMac. This is the third year and fourth generation of the Apple Silicon-era MacBook Pro design, and outwardly, few things have changed about the new M4, M4 Pro, and M4 Max laptops.
Here are the things that are different. Boosted RAM capacities, across the entire lineup but most crucially in the entry-level $1,599 M4 MacBook Pro, make the new laptops a shade cheaper and more versatile than they used to be. The new nano-texture display option, a $150 upgrade on all models, is a lovely matte-textured coating that completely eliminates reflections. There's a third Thunderbolt port on the baseline M4 model (the M3 model had two), and it can drive up to three displays simultaneously (two external, plus the built-in screen). There's a new webcam. It looks a little nicer and has a wide-angle lens that can show what's on your desk instead of your face if you want it to. And there are new chips, which we'll get to.
[Image captions: keyboard and trackpad; the 16-inch model looks the same, just with a bigger trackpad; "MacBook Pro" is etched on the bottom of the laptops; ports on the left: MagSafe and two Thunderbolt 4 (M4) or Thunderbolt 5 (M4 Pro/M4 Max) ports; on the right: an SD card reader, a Thunderbolt 4 or 5 port, and HDMI. Credit: Andrew Cunningham]

That is essentially the end of the list. If you are still using an Intel-era MacBook Pro, I'll point you to our previous reviews, which mostly celebrate the improvements (more and different kinds of ports, larger screens) while picking one or two nits (they are a bit larger and heavier than late-Intel MacBook Pros, and the display notch is an eyesore).
Calling all Ars readers! Your feedback is needed.
Many of you know that most of our staff is spread out all over these United States, but what you might not know is that it has been more than five years since many of us saw each other in meatspace. Travel budgets and the pandemic conspired to keep us apart, but we are finally gathering Team Ars in New York City later this week. We’d love for you to be there, too, in spirit.
As we gear up for our big fall meeting, we want to hear from you! We've set up a special email address, Tellus@arstechnica.com, just for reader feedback. We won’t harvest your email for spam or some nonsense—we just want to hear from you.
What would we like to hear about? We're eager to know your thoughts on what we're doing right, where we could improve, and what you'd like to see more (or less) of. What topics do you think we should be covering that we aren’t? Are we hitting the right balance in our reporting? Is there too much doom and gloom, or not enough? Feel free to be as specific and loquacious as you wish.