Tuesday, 31 October 2023

The Morning After: Apple reveals new MacBook Pros, M3 chips and a new iMac

During its Scary Fast product event last night, Apple officially debuted its new M3, M3 Pro and M3 Max chips. The company is positioning the M3 chips as major upgrades over its M1 hardware — if you bought an M2 system, you’re probably not itching for a replacement just yet.

The M3’s GPU is the biggest leap forward, delivering new features like hardware-accelerated ray tracing and mesh shading, which enable more realistic lighting and better geometry handling. If you’re into chip architecture and other fun endeavors, the M3 chips are also notable for being the first PC chips built on a three-nanometer process — both the M1 and M2 families are based on a 5nm process. That means more transistors packed into the same space, which helps with power efficiency as well as overall performance. The M3 series will feature in the revamped MacBook Pro 14-inch and 16-inch (more on those below), as well as the 24-inch iMac.

That new chip will make the new iMac up to twice as fast as its predecessor, but there aren’t too many upgrades elsewhere in the latest Mac. Apple is sticking with a 4.5K Retina display, for instance. There are some handy changes on the connectivity front, now with support for Wi-Fi 6E and Bluetooth 5.3. The new iMac starts at $1,299 and ships on November 7.

— Mat Smith

You can get these reports delivered daily direct to your inbox. Subscribe right here!

The biggest stories you might have missed

Sweeping White House executive order takes aim at AI’s toughest challenges

Sony’s WH-1000XM5 ANC headphones drop to $330

The best cheap phones for 2023

Avatar: Frontiers of Pandora’s adventurous spirit might just win you over

Apple kills off the 13-inch MacBook Pro

But it has new 14- and 16-inch models, don’t worry.


Apple’s updated line of 14- and 16-inch MacBook Pros features a range of new M3 chips and a new Space Black chassis. Was that the spooky part of Apple’s event?

The 14-inch MBP with a base M3 processor will cost $1,599 — the lowest starting price yet for the 14-inch laptop. The M3 Pro iteration will still cost you $1,999, and prices go up from there for M3 Max options. Meanwhile, a base 16-inch MacBook Pro with an M3 Pro chip will have the same $2,499 starting price as its M2 Pro predecessor. Alas, the 13-inch version is no more. Farewell, Touch Bar.

Continue reading.

Lenovo Smart Paper review

A solid e-ink tablet spoiled by the cost.


In the last few years, we’ve seen Amazon get into e-ink note-taking with the Kindle Scribe, while startups like reMarkable have carved out their own niche with capable hardware for a reasonable price. Lenovo, having dabbled with e-ink on devices like the Yoga Book, has joined the fray with a dedicated device, the Smart Paper. While the product hasn’t yet launched in the US, the Smart Paper has launched elsewhere, including the UK. At around $400 (or £500 in the UK), it’s expensive. The hardware is impressive (and useful), but it’s all tainted by a subscription service that demands even more money.

Continue reading.

X won’t pay creators for tweets that get fact-checked with community notes

The ‘slight change’ is the latest attempt to address misinformation.

X will no longer pay creators for tweets promoting misinformation. Elon Musk said the company is making a “slight change” to its monetization program, and tweets fact-checked via community notes will no longer be eligible for payouts.

The latest change comes as researchers, fact-checkers and journalists have raised the alarm about the amount of viral misinformation spreading on X amid the ongoing conflict in Israel and Gaza. Recent analysis from NewsGuard, a company that tracks the spread of misinformation, found 74 percent of “the most viral posts on X advancing misinformation about the Israel–Hamas war are being pushed by ‘verified’ X accounts.”

Continue reading.

This article originally appeared on Engadget at https://ift.tt/rd4RDVk


Monday, 30 October 2023

The Morning After: Samsung pays tribute to its flip phone past with limited-edition foldable

Samsung has unveiled the Galaxy Z Flip 5 Retro, a limited-edition foldable that pays homage to the SGH-E700 (AKA the SGH-E715 in the US), which came out 20 years ago in 2003. It has the same indigo blue and silver color combo as the original and a few special widgets, but it’s otherwise the same foldable flip phone from earlier this year. This special edition will go on sale in Korea and several countries in Europe, but not the US.


The SGH-E700 was Samsung’s first mobile phone with an integrated antenna and became a certified hit, selling more than 10 million units. Weirdly, this isn’t even the first time Samsung has tugged at the nostalgia strings with this phone: in 2007, it effectively reissued the same phone with new radios, even though the original was only four years old at the time.

— Mat Smith

You can get these reports delivered daily direct to your inbox. Subscribe right here!

The biggest stories you might have missed

How to customize the double tap gesture on Apple Watch

The best gadgets for your pets

Is streaming video even still worth it?

What the evolution of our own brains can tell us about the future of AI

What we got right (and wrong) about Elon Musk’s takeover of Twitter

One year later, it’s X.


Exactly one year has passed since Elon Musk, fresh off a months-long legal battle that forced him to buy the company, strolled into Twitter headquarters carrying a sink. We weren’t entirely sure what to expect. But there was no shortage of predictions about just how messy and chaotic Twitter might become under Musk’s leadership. The biggest twist, however, might be Meta making its Twitter rival, Threads, into a viable (if flawed) alternative. Karissa Bell walks through what did (and didn’t) happen when Musk took charge.

Continue reading.

Threads is working on an API for developers

Threads aims to be the place for public conversations online.

Threads was missing a lot of features users would expect from a service similar to Twitter (now X) when it launched. It has added plenty of new features over the past few months, but it still doesn’t have an API, so third-party developers can’t create features with hooks into their services. For example, local transport agencies can’t automatically post service alerts when a train is delayed.

According to Instagram chief Adam Mosseri, though, Threads is working on an API for developers — he just has some reservations. He’s concerned the API’s launch could mean “a lot more publisher content and not much more creator content.” Mosseri may be hinting at the early days of Threads, where people’s feeds were dominated by brands and accounts with (presumably) social media staffers posting up a storm.

Continue reading.

Google’s default search engine status cost it $26 billion in 2021

The figure was revealed in the DOJ’s antitrust trial against the search giant.

Google VP Prabhakar Raghavan testified the company paid $26.3 billion in 2021 for maintaining default search engine status and acquiring traffic. Most of that likely went to Apple, in order to remain the default search option on iPhone, iPad and Mac.

Raghavan, who was testifying as part of the DOJ’s ongoing antitrust suit against the company, said Google’s search advertising made $146.4 billion in revenue in 2021, which puts the $26 billion it paid for default status in perspective. The executive added that default status made up the lion’s share of what it pays to acquire traffic.

Continue reading.

How to watch Apple’s Scary Fast event

The night time is the right time for new iMacs and laptops.

Apple’s holding another streaming event today, Monday, October 30, at 8PM ET. Yes, that’s in the dead of night, and you can watch the stream on YouTube, on Apple’s website and on Apple TV devices. Here’s what you can expect to see.

Watch here.

This article originally appeared on Engadget at https://ift.tt/1sAQxpE


Sweeping White House executive order takes aim at AI's toughest challenges

The Biden Administration unveiled its ambitious next steps in addressing and regulating artificial intelligence development on Monday. Its expansive new executive order seeks to establish further protections for the public as well as improve best practices for federal agencies and their contractors.

"The President several months ago directed his team to pull every lever," a senior administration official told reporters on a recent press call. "That's what this order does, bringing the power of the federal government to bear in a wide range of areas to manage AI's risk and harness its benefits ... It stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world and like all executive orders, this one has the force of law."

These actions will be introduced over the next year with smaller safety and security changes happening in around 90 days and with more involved reporting and data transparency schemes requiring 9 to 12 months to fully deploy. The administration is also creating an “AI council,” chaired by White House Deputy Chief of Staff Bruce Reed, who will meet with federal agency heads to ensure that the actions are being executed on schedule.

[Image: Bruce Reed, Assistant to the President and Deputy Chief of Staff, walks to Marine One behind President Joe Biden, July 6, 2022, in Washington (AP Photo/Patrick Semansky)]

Public Safety

"In response to the President's leadership on the subject, 15 major American technology companies have begun their voluntary commitments to ensure that AI technology is safe, secure and trustworthy before releasing it to the public," the senior administration official said. "That is not enough."

The EO directs the establishment of new standards for AI safety and security, including reporting requirements for developers whose foundation models might impact national or economic security. Those requirements will also apply in developing AI tools to autonomously implement security fixes on critical software infrastructure. 

By leveraging the Defense Production Act, this EO will "require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests," per a White House press release. That information must be shared prior to the model being made available to the public, which could help reduce the rate at which companies unleash half-baked and potentially deadly machine learning products.

In addition to the sharing of red team test results, the EO also requires disclosure of the system’s training runs (essentially, its iterative development history). “What that does is that creates a space prior to the release… to verify that the system is safe and secure,” officials said.

Administration officials were quick to point out that this reporting requirement will not impact any AI models currently available on the market, nor will it impact independent or small- to medium-size AI companies moving forward, as the threshold for enforcement is quite high. It's geared specifically toward the next generation of AI systems that the likes of Google, Meta and OpenAI are already working on, with enforcement kicking in for models trained on more than 10^26 floating-point operations, a compute budget beyond any existing AI model. "This is not going to catch AI systems trained by graduate students, or even professors,” the administration official said.
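For a sense of the scale involved, here's a quick back-of-envelope sketch (our own illustration: the threshold comes from the order, but the cluster throughput is an assumed figure). Even hardware sustaining a full exaFLOP per second would need roughly three years to accumulate 10^26 operations.

```python
# Back-of-envelope scale check for the EO's 10^26-operation reporting threshold.
# The sustained throughput below is an assumption for illustration only.
THRESHOLD_OPS = 1e26      # total floating-point operations named in the order
SUSTAINED_RATE = 1e18     # assumed cluster throughput: 1 exaFLOP per second

seconds = THRESHOLD_OPS / SUSTAINED_RATE
years = seconds / (60 * 60 * 24 * 365)
print(f"{seconds:.0e} seconds ≈ {years:.1f} years")  # 1e+08 seconds ≈ 3.2 years
```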

What's more, the EO will encourage the Departments of Energy and Homeland Security to address AI threats "to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks," per the release. "Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI." In short, any developers found in violation of the EO can likely expect a prompt and unpleasant visit from the DoE, FDA, EPA or other applicable regulatory agency, regardless of their AI model’s age or processing speed.

In an effort to proactively address the decrepit state of America's digital infrastructure, the order also seeks to establish a cybersecurity program, based loosely on the administration's existing AI Cyber Challenge, to develop AI tools that can autonomously root out and shore up security vulnerabilities in critical software infrastructure. It remains to be seen whether those systems will be able to address the concerns about misbehaving models that SEC chair Gary Gensler recently raised.

AI Watermarking and Cryptographic Validation

We're already seeing the normalization of deepfake trickery and AI-empowered disinformation on the campaign trail. So, the White House is taking steps to ensure that the public can trust the text, audio and video content that it publishes on its official channels. The public must be able to easily validate whether the content they see is AI-generated or not, argued White House officials on the press call. 

[Image: AI-generated image of penguins in a desert, with a Content Credentials information window open in the upper right corner (Adobe)]

The Department of Commerce is in charge of this effort and is expected to work closely with existing industry advocacy groups like the C2PA and its sister organization, the CAI, to develop and implement a watermarking system for federal agencies. “We aim to support and facilitate and help standardize that work [by the C2PA],” administration officials said. “We see ourselves as plugging into that ecosystem.”

Officials further explained that the government is supporting the underlying technical standards and practices that will lead to digital watermarking’s wider adoption — similar to the work it did around developing the HTTPS ecosystem and getting both developers and the public on board with it. This will help federal officials achieve their other goal of ensuring that the government's official messaging can be relied upon.
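As a loose illustration of what cryptographic validation means here (a minimal hash-and-sign sketch of our own, not C2PA's actual manifest format or any federal standard):

```python
# Toy content-provenance check: a publisher attaches a keyed digest to its
# content, and a holder of the same key can later confirm nothing was altered.
# Real systems such as C2PA use public-key signatures and richer metadata;
# this shows only the hash-and-verify skeleton of the idea.
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # stand-in; real deployments use asymmetric keys

def sign(content: bytes) -> str:
    """Return a keyed digest over the SHA-256 hash of the content."""
    return hmac.new(SECRET_KEY, hashlib.sha256(content).digest(), hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Recompute the digest and compare in constant time."""
    return hmac.compare_digest(sign(content), tag)

release = b"Official statement text"
tag = sign(release)
print(verify(release, tag))               # True: content is intact
print(verify(release + b" edited", tag))  # False: content was tampered with
```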

Civil Rights and Consumer Protections

The first Blueprint for an AI Bill of Rights that the White House released last October directed agencies to “combat algorithmic discrimination while enforcing existing authorities to protect people's rights and safety,” the administration official said. “But there's more to do.” 

The new EO will require guidance be extended to “landlords, federal benefits programs and federal contractors” to prevent AI systems from exacerbating discrimination within their spheres of influence. It will also direct the Department of Justice to develop best practices for investigating and prosecuting civil rights violations related to AI, as well as, per the announcement, “the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis."

Additionally, the EO calls for prioritizing federal support to accelerate development of privacy-preserving techniques that would enable future LLMs to be trained on large datasets without the current risk of leaking personal details that those datasets might contain. These solutions could include “cryptographic tools that preserve individuals’ privacy,” per the White House release, developed with assistance from the Research Coordination Network and National Science Foundation. The executive order also reiterates its calls for bipartisan legislation from Congress addressing the broader privacy issues that AI systems present for consumers.

In terms of healthcare, the EO states that the Department of Health and Human Services will establish a safety program that tracks and remedies unsafe, AI-based medical practices. Educators will also see support from the federal government in using AI-based educational tools like personalized chatbot tutoring.

Worker Protections

The Biden administration concedes that while the AI revolution is a decided boon for business, its capabilities make it a threat to worker security through job displacement and intrusive workplace surveillance. The EO seeks to address these issues with “the development of principles and employer best practices that mitigate the harms and maximize the benefit of AI for workers,” an administration official said. “We encourage federal agencies to adopt these guidelines in the administration of their programs.”

[Image: Striking workers protest outside Paramount Pictures Studio on Wednesday, September 13, 2023, in Los Angeles, after Hollywood studios walked away from negotiations with the striking actors' union (Richard Shotwell/Invision/AP)]

The EO will also direct the Department of Labor and the Council of Economic Advisers to study both how AI might impact the labor market and how the federal government might better support workers “facing labor disruption” moving forward. Administration officials also pointed to the potential benefits that AI might bring to the federal bureaucracy, including cutting costs and increasing cybersecurity efficacy. “There's a lot of opportunity here, but we have to ensure the responsible government development and deployment of AI,” an administration official said.

To that end, the administration is launching on Monday a new federal jobs portal, AI.gov, which will offer information and guidance on available fellowship programs for folks looking for work with the federal government. “We're trying to get more AI talent across the board,” an administration official said. “Programs like the US Digital Service, the Presidential Innovation Fellowship and USAJobs — doing as much as we can to get talent in the door.” The White House is also looking to expand existing immigration rules to streamline visa criteria, interviews and reviews for folks trying to move to and work in the US in these advanced industries.

The White House reportedly did not give the industry a preview of this particular swath of sweeping policy changes, though administration officials did note that they had already been collaborating extensively with AI companies on many of these issues. The Senate held its second AI Insight Forum event last week on Capitol Hill, while Vice President Kamala Harris is scheduled to speak at the UK Summit on AI Safety, hosted by Prime Minister Rishi Sunak, on Tuesday.

[Image: Senate Majority Leader Charles Schumer (D-NY) talks to reporters following the weekly Senate Democratic policy luncheon at the U.S. Capitol, September 12, 2023 (Chip Somodevilla via Getty Images)]

At a Washington Post event on Thursday, Senate Majority Leader Charles Schumer (D-NY) was already arguing that the executive order did not go far enough and could not be considered an effective replacement for congressional action, which, to date, has been slow in coming.

“There’s probably a limit to what you can do by executive order,” Schumer told WaPo. “They [the Biden Administration] are concerned, and they’re doing a lot regulatorily, but everyone admits the only real answer is legislative.”

This article originally appeared on Engadget at https://ift.tt/TVb1zP6


Sunday, 29 October 2023

New report reveals details on the three M3 chips Apple may launch Monday night

Apple is planning to debut three M3 chips at its “Scary Fast” Mac event Monday night, according to Bloomberg’s Mark Gurman — the M3, M3 Pro and M3 Max. The event is set to kick off at 8 PM ET and is expected to bring multiple hardware announcements. Gurman previously reported that the company is prepping a new 24-inch iMac which could make an appearance tomorrow, along with upgraded MacBook Pros running the new M3 series.

In the Power On newsletter, Gurman writes that the standard M3 chip is likely to sport an eight-core CPU and 10-core GPU like the M2, but with improvements to performance speed and memory. He also notes the company is testing multiple configurations for both the M3 Pro and M3 Max chips. We may see an M3 Pro with 12-core CPU/18-core GPU and the option for a pricier 14-core CPU with a 20-core GPU. Meanwhile, the M3 Max could come with 16 CPU cores and either 32 or 40 GPU cores.

We won’t know anything for sure until Apple's unusually timed October event starts tomorrow night. Thankfully, that’s not a long time to wait. Join us here to watch as it all unfolds.

This article originally appeared on Engadget at https://ift.tt/SZnbAWp


What the evolution of our own brains can tell us about the future of AI

The explosive growth in artificial intelligence in recent years — crowned with the meteoric rise of generative AI chatbots like ChatGPT — has seen the technology take on many tasks that, formerly, only human minds could handle. But despite their increasingly capable linguistic computations, these machine learning systems remain surprisingly inept at making the sorts of cognitive leaps and logical deductions that even the average teenager can consistently get right. 

In this week's Hitting the Books excerpt from A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains, AI entrepreneur Max Bennett examines the puzzling gap in computer competency by exploring the development of the organic machine AIs are modeled after: the human brain.

Focusing on the five evolutionary "breakthroughs" (amid myriad genetic dead ends and unsuccessful offshoots) that led our species to our modern minds, Bennett also shows that the same advancements that took humanity eons to evolve can be adapted to help guide development of the AI technologies of tomorrow. In the excerpt below, we take a look at how generative AI systems like GPT-3 are built to mimic the predictive functions of the neocortex, but still can't quite get a grasp on the vagaries of human speech.

[Image: the book's cover, a brain overlaid with words (HarperCollins)]

Excerpted from A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains by Max Bennett. Published by Mariner Books. Copyright © 2023 by Max Bennett. All rights reserved.


Words Without Inner Worlds

GPT-3 is given word after word, sentence after sentence, paragraph after paragraph. During this long training process, it tries to predict the next word in any of these long streams of words. And with each prediction, the weights of its gargantuan neural network are nudged ever so slightly toward the right answer. Do this an astronomical number of times, and eventually GPT-3 can automatically predict the next word based on a prior sentence or paragraph. In principle, this captures at least some fundamental aspect of how language works in the human brain. Consider how automatic it is for you to predict the next symbol in the following phrases:

  • One plus one equals _____

  • Roses are red, violets are _____

You’ve seen similar sentences endless times, so your neocortical machinery automatically predicts what word comes next. What makes GPT-3 impressive, however, is not that it just predicts the next word of a sequence it has seen a million times — that could be accomplished with nothing more than memorizing sentences. What is impressive is that GPT-3 can be given a novel sequence that it has never seen before and still accurately predict the next word. This, too, clearly captures something that the human brain can _____.

Could you predict that the next word was do? I’m guessing you could, even though you had never seen that exact sentence before. The point is that both GPT-3 and the neocortical areas for language seem to be engaging in prediction. Both can generalize past experiences, apply them to new sentences, and guess what comes next.
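To make "prediction" concrete, here is a deliberately tiny sketch of the task (a word-level bigram counter of our own devising; GPT-3 replaces these counts with a trained neural network holding billions of weights, but the predict-the-next-word loop is the same):

```python
# Toy next-word predictor: count which word follows which in a corpus,
# then predict the most frequent follower of the current word.
from collections import Counter, defaultdict

corpus = "roses are red violets are blue one plus one equals two".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1  # tally each observed (word, next-word) pair

def predict_next(word: str) -> str:
    """Return the most frequently observed follower of `word`."""
    return followers[word].most_common(1)[0][0]

print(predict_next("violets"))  # "are"
print(predict_next("plus"))     # "one"
```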

GPT-3 and similar language models demonstrate how a web of neurons can reasonably capture the rules of grammar, syntax, and context if it is given sufficient time to learn. But while this shows that prediction is part of the mechanisms of language, does this mean that prediction is all there is to human language? Try to finish these four questions:

  • If 3x + 1 = 3, then x equals _____

  • I am in my windowless basement, and I look toward the sky, and I see _____

  • He threw the baseball 100 feet above my head, I reached my hand up to catch it, jumped, and _____

  • I am driving as fast as I can to LA from New York. One hour after passing through Chicago, I finally _____

Here something different happens. In the first question, you likely paused and performed some mental arithmetic before being able to answer the question. In the other questions, you probably, even for only a split second, paused to visualize yourself in a basement looking upward, and realized what you would see is the ceiling. Or you visualized yourself trying to catch a baseball a hundred feet above your head. Or you imagined yourself one hour past Chicago and tried to find where you would be on a mental map of America. With these types of questions, more is happening in your brain than merely the automatic prediction of words.

We have, of course, already explored this phenomenon—it is simulating. In these questions, you are rendering an inner simulation, either of shifting values in a series of algebraic operations or of a three-dimensional basement. And the answers to the questions are to be found only in the rules and structure of your inner simulated world.

I gave the same four questions to GPT-3; here are its responses, with GPT-3's completion picking up where each prompt leaves off:

  • If 3x + 1 = 3 , then x equals 1

  • I am in my windowless basement, and I look toward the sky, and I see a light, and I know that it is a star, and I am happy.

  • He threw the baseball 100 feet above my head, I reached my hand up to catch it, jumped, and caught it. It was a lot of fun!

  • I am driving as fast as I can to LA from New York. One hour after passing through Chicago, I finally get to see the Pacific Ocean.

All four of these responses demonstrate that GPT-3, as of June 2022, lacked an understanding of even simple aspects of how the world works. If 3x + 1 = 3, then x equals 2/3, not 1. If you were in a basement and looked toward the sky, you would see your ceiling, not stars. If you tried to catch a ball 100 feet above your head, you would not catch the ball. If you were driving to LA from New York and you’d passed through Chicago one hour ago, you would not yet be at the coast. GPT-3’s answers lacked common sense.
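For reference, the first correction takes a single line of algebra:

```latex
3x + 1 = 3 \;\Rightarrow\; 3x = 2 \;\Rightarrow\; x = \tfrac{2}{3}
```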

What I found was not surprising or novel; it is well known that modern AI systems, including these new supercharged language models, struggle with such questions. But that’s the point: Even a model trained on the entire corpus of the internet, running up millions of dollars in server costs — requiring acres of computers on some unknown server farm — still struggles to answer common sense questions, those presumably answerable by even a middle-school human.

Of course, reasoning about things by simulating also comes with problems. Suppose I asked you the following question:

Tom W. is meek and keeps to himself. He likes soft music and wears glasses. Which profession is Tom W. more likely to be?

1) Librarian

2) Construction worker

If you are like most people, you answered librarian. But this is wrong. Humans tend to ignore base rates—did you consider the base number of construction workers compared to librarians? There are probably one hundred times more construction workers than librarians. And because of this, even if 95 percent of librarians are meek and only 5 percent of construction workers are meek, there still will be far more meek construction workers than meek librarians. Thus, if Tom is meek, he is still more likely to be a construction worker than a librarian.
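With the illustrative numbers above (100 construction workers for every librarian, 95 percent of librarians meek, 5 percent of construction workers meek), Bayes' rule makes the base-rate effect explicit:

```latex
\frac{P(\text{librarian} \mid \text{meek})}{P(\text{worker} \mid \text{meek})}
  = \frac{0.95 \times 1}{0.05 \times 100}
  = \frac{0.95}{5} \approx 0.19
```

That is, even granting that librarians are far more likely to be meek, a meek person is still roughly five times more likely to be a construction worker.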

The idea that the neocortex works by rendering an inner simulation and that this is how humans tend to reason about things explains why humans consistently get questions like this wrong. We imagine a meek person and compare that to an imagined librarian and an imagined construction worker. Who does the meek person seem more like? The librarian. Behavioral economists call this the representative heuristic. This is the origin of many forms of unconscious bias. If you heard a story of someone robbing your friend, you can’t help but render an imagined scene of the robbery, and you can’t help but fill in the robbers. What do the robbers look like to you? What are they wearing? What race are they? How old are they? This is a downside of reasoning by simulating — we fill in characters and scenes, often missing the true causal and statistical relationships between things.

It is with questions that require simulation that language in the human brain diverges from language in GPT-3. Math is a great example of this. The foundation of math begins with declarative labeling. You hold up two fingers or two stones or two sticks, engage in shared attention with a student, and label it two. You do the same thing with three of each and label it three. Just as with verbs (e.g., running and sleeping), in math we label operations (e.g., add and subtract). We can thereby construct sentences representing mathematical operations: three add one.

Humans don’t learn math the way GPT-3 learns math. Indeed, humans don’t learn language the way GPT-3 learns language. Children do not simply listen to endless sequences of words until they can predict what comes next. They are shown an object, engage in a hardwired nonverbal mechanism of shared attention, and then the object is given a name. The foundation of language learning is not sequence learning but the tethering of symbols to components of a child’s already present inner simulation.

A human brain, but not GPT-3, can check the answers to mathematical operations using mental simulation. If you add one to three using your fingers, you notice that you always get the thing that was previously labeled four.

You don’t even need to check such things on your actual fingers; you can imagine these operations. This ability to find the answers to things by simulating relies on the fact that our inner simulation is an accurate rendering of reality. When I mentally imagine adding one finger to three fingers, then count the fingers in my head, I count four. There is no reason why that must be the case in my imaginary world. But it is. Similarly, when I ask you what you see when you look toward the ceiling in your basement, you answer correctly because the three-dimensional house you constructed in your head obeys the laws of physics (you can’t see through the ceiling), and hence it is obvious to you that the ceiling of the basement is necessarily between you and the sky. The neocortex evolved long before words, already wired to render a simulated world that captures an incredibly vast and accurate set of physical rules and attributes of the actual world.

To be fair, GPT-3 can, in fact, answer many math questions correctly. GPT-3 will be able to answer 1 + 1 =___ because it has seen that sequence a billion times. When you answer the same question without thinking, you are answering it the way GPT-3 would. But when you think about why 1 + 1 = 2, when you prove it to yourself again by mentally imagining the operation of adding one thing to another thing and getting back two things, then you know that 1 + 1 = 2 in a way that GPT-3 does not.

The human brain contains both a language prediction system and an inner simulation. The best evidence for the idea that we have both these systems is experiments pitting one system against the other. Consider the cognitive reflection test, designed to evaluate someone’s ability to inhibit her reflexive response (e.g., habitual word predictions) and instead actively think about the answer (e.g., invoke an inner simulation to reason about it):

Question 1: A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?

If you are like most people, your instinct, without thinking about it, is to answer ten cents. But if you thought about this question, you would realize this is wrong; the answer is five cents. Similarly:

Question 2: If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?

Here again, if you are like most people, your instinct is to say “One hundred minutes,” but if you think about it, you would realize the answer is still five minutes.
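Written out, both answers fall out of a line or two of arithmetic:

```latex
% Bat and ball: let b be the ball's price in dollars
b + (b + 1.00) = 1.10 \;\Rightarrow\; 2b = 0.10 \;\Rightarrow\; b = 0.05
% Widgets: each machine makes one widget per five minutes,
% so 100 machines make 100 widgets in those same five minutes.
```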

And indeed, as of December 2022, GPT-3 got both of these questions wrong in exactly the same way people do: it answered ten cents to the first question and one hundred minutes to the second.

The point is that human brains have an automatic system for predicting words (one probably similar, at least in principle, to models like GPT-3) and an inner simulation. Much of what makes human language powerful is not the syntax of it, but its ability to give us the necessary information to render a simulation about it and, crucially, to use these sequences of words to render the same inner simulation as other humans around us.

This article originally appeared on Engadget at https://ift.tt/TQ3n4hV


Saturday, 28 October 2023

NASA is launching a rocket on Sunday to study a 20,000-year-old supernova

A sounding rocket toting a special imaging and spectroscopy instrument will take a brief trip to space Sunday night to try and capture as much data as it can on a long-admired supernova remnant in the Cygnus constellation. Its target, a massive cloud of dust and gas known as the Cygnus Loop or the Veil Nebula, was created after the explosive death of a star an estimated 20,000 years ago — and it’s still expanding.

NASA plans to launch the mission at 11:35 PM ET on Sunday, October 29 from the White Sands Missile Range in New Mexico. The Integral Field Ultraviolet Spectroscopic Experiment, or INFUSE, will observe the Cygnus Loop for only a few minutes, capturing light in the far-ultraviolet wavelengths to illuminate gases as hot as 90,000 to 540,000 degrees Fahrenheit. It’s expected to fly to an altitude of about 150 miles before parachuting back to Earth.

The Cygnus Loop sits about 2,600 light-years away, and was formed by the collapse of a star thought to be 20 times the size of our sun. Since the aftermath of the event is still playing out, with the cloud currently expanding at a rate of 930,000 miles per hour, it’s a good candidate for studying how supernovae affect the formation of new star systems. “Supernovae like the one that created the Cygnus Loop have a huge impact on how galaxies form,” said Brian Fleming, principal investigator for the INFUSE mission.

“INFUSE will observe how the supernova dumps energy into the Milky Way by catching light given off just as the blast wave crashes into pockets of cold gas floating around the galaxy,” Fleming said. Once INFUSE is back on the ground and its data has been collected, the team plans to fix it up and eventually launch it again.

This article originally appeared on Engadget at https://ift.tt/NGvYA0U


Instagram head says Threads is working on an API for developers

Threads was missing a lot of features users would expect from a service similar to Twitter (now X) when it launched. Over the past few months, however, it has been rolling out more and more new features to give users a more robust experience, including polls, an easy way to post GIFs and the ability to quote posts on the web. Still, since it doesn't have an API, third-party developers can't conjure features specific to their services that would make the social network a more integral part of people's everyday lives. An example of that is local transportation agencies being able to automatically post service alerts when a train is delayed, as sketched below. According to Instagram chief Adam Mosseri, though, Threads is working on an API for developers — he just has concerns about how it's going to be used.
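To make the transit-alert example concrete, here is the kind of integration an API would enable (a sketch against a purely hypothetical endpoint: Threads had published no API at the time, so the URL, auth scheme and payload below are all invented for illustration):

```python
# Hypothetical auto-posting bot for a transit agency. The endpoint, token
# and payload shape are invented; Threads had no public API when this was
# written, so treat this as a sketch of the idea, not code for a real service.
import requests

API_URL = "https://example.com/threads-api/v1/posts"  # hypothetical endpoint
ACCESS_TOKEN = "AGENCY_ACCESS_TOKEN"                  # hypothetical credential

def post_service_alert(line: str, delay_minutes: int) -> None:
    """Publish a delay notice as a post, as an agency's monitor might."""
    text = f"Service alert: {line} trains are delayed about {delay_minutes} minutes."
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"text": text},
        timeout=10,
    )
    resp.raise_for_status()

# Called automatically when the agency's monitoring system detects a delay:
# post_service_alert("Northbound A", 15)
```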

As first reported by TechCrunch, Mosseri responded to a conversation on the platform about having a TweetDeck-like experience for Threads. In a response to a user saying that Threads has no API yet, the executive said: "We're working on it." He added that he's concerned that the API's launch could mean "a lot more publisher content and not much more creator content," but he's aware that it "seems like something [the company needs] to get done."

Mosseri previously said that Threads won't amplify news, which may have been disappointing to hear for publishers and readers looking to leave X. Instead, he said, Threads wants to "empower creators in general." More recently, in an AMA he posted on the platform, Mosseri said that his team's long-term aspiration is for Threads to become "the de facto platform for public conversations online," which means being both culturally relevant and big in terms of user size. He said he believes Threads has a chance of surpassing X, but he knows that his service has a long way to go. For now, he keeps his team focused on making people's experience better week by week.

Mark Zuckerberg recently announced that Threads has "just under" 100 million monthly active users. Like Mosseri, he is optimistic about its future and said that there's a "good chance" it could reach 1 billion users over the next couple of years.

This article originally appeared on Engadget at https://ift.tt/FL1yoTZ


Little Nightmares 3 Gameplay Trailer Shows Co-op in Horror Sequel


Little Nightmares 3 has received an 18-minute gameplay trailer showing its co-op gameplay in a desert-like area called The Necropolis.

Sand: the real Little Nightmare

The pre-alpha footage is interesting for several reasons. Seeing the co-op in action is one; another is the fairly bright color scheme, which refreshes the successful template that previous developer Tarsier Studios employed.

In Little Nightmares 3, you follow the journey of Low & Alone, as they search for a path that could lead them out of the Nowhere.

Trapped within the Spiral, a cluster of disturbing places, the two friends must work together to survive in a dangerous world full of delusions and escape the grasp of an even greater threat lurking in the shadows.

For the first time in the franchise, face your childhood fears together with a friend in online co-op, or solo with an AI companion.

Little Nightmares 3 will be released on PS4, PS5, Xbox One, Xbox Series X|S, Nintendo Switch, and PC in 2024.

The post Little Nightmares 3 Gameplay Trailer Shows Co-op in Horror Sequel appeared first on PlayStation LifeStyle.




Friday, 27 October 2023

Alan Wake 2 DLC Expansions Will Release in 2024


Remedy Entertainment has detailed its Alan Wake 2 DLC expansions and given them a release window of sorts.

Springs and Lakes for Alan Wake 2 DLC expansions

Two expansions will be released in 2024. The first, Night Springs, is due to come out in Spring 2024, while the second, The Lake House, has a more tentative 2024 date.

There were, however, more concrete details on what the expansions would be about. For Night Springs, yes, Alan Wake fans, it is about that in-game TV show.

“Visions and dreams. Fiction is written and coming true. Fiction collapses and remains just words on a page. These are those stories… in Night Springs. Play as several familiar characters from the world of Alan Wake and experience the unexplainable in multiple self-contained episodes of Night Springs, a fictional TV-show set in the world of Alan Wake.”

For The Lake House, it’ll be a return to another familiar Alan Wake spot in Cauldron Lake.

“The Lake House is a mysterious facility situated on the shores of Cauldron Lake set up by an independent government organization to conduct secret research… until something goes wrong. Explore the Lake House and embark on two separate adventures as the realities of Saga Anderson and Alan Wake collide again.”

There’s been plenty of chatter about what comes next for Alan Wake 2 after its launch today. A New Game Plus mode known as The Final Draft will allow players to keep unlocked upgrades and weapons, add a new Nightmare difficulty mode and, most intriguingly, offer a new alternative narrative, including new Manuscript pages and new video content.

Remedy Entertainment’s Sam Lake also recently told fans at EGX that there will be significant free DLC for the game alongside paid expansions. The additions coming in The Final Draft certainly fit the bill.

Alan Wake 2 is out now on PC, PS5, and Xbox Series X|S. It is currently available in digital form only.

The post Alan Wake 2 DLC Expansions Will Release in 2024 appeared first on PlayStation LifeStyle.




Hertz decides to slow down its EV expansion

In 2021, Hertz announced that it was going to order 100,000 electric vehicles from Tesla by the end of 2022. It turns out the car rental company is still far from reaching that number, and it may take a while to get to 100,000, if it ever does, because it's slowing down its plans to electrify its fleet. During the company's third-quarter earnings call (PDF), CEO Stephen Scherr said Hertz's "in-fleeting of EVs will be slower than [its] prior expectations."

Hertz reported a 13 percent margin for the quarter, which Scherr said would've been "several points higher" if not for the cost challenges associated with EVs. One of the factors that affected the company's margins was depreciation, compounded by the one-third drop in retail prices of the electric cars in its fleet. Tesla had implemented several price cuts over the past year, slashing the Model S and X prices by nearly 20 percent in September.

In addition, the CEO said that EVs are costing Hertz "about twice in terms of damage cost repair than a conventional internal combustion engine vehicle." He said the company is working directly with Tesla to look at its cars' performance and lower the risk of damage, as well as on parts procurement and labor. The company disclosed in its earnings report that 80 percent of its EV fleet is made up of Tesla vehicles, which works out to 35,000 Teslas out of its 50,000 EVs. As CNBC explains, EVs come with their own set of maintenance challenges, potentially brought about by their heavier weight. Aside from those two factors, moving a portion of its EV fleet from ridesharing use to leisure affected its margins as well. Hertz rents Tesla EVs to Uber and Lyft drivers, and it's now planning to move the vehicles it removed from the pool back to its ridesharing business.

Scherr said Hertz remains committed to its long-term plan to electrify its fleet, but it's going to pace itself while it looks for solutions to its EV-related issues. The CEO talked about how adding EVs from other manufacturers, such as GM, could address some of the problems it's facing. He expects Hertz to be able to purchase them at an "appreciably lower price point" than the prices it paid for its Tesla vehicles. He also thinks those cars "will likely speak to lower incidence of damage," as well as to "a lower cost of parts and labor." GM and other traditional automakers have a broad parts supply network established nationwide over the decades, which will make it easier — and potentially cheaper, due to aftermarket availability — to procure components.

This article originally appeared on Engadget at https://ift.tt/QbFLA5c


Thursday, 26 October 2023

PS5 System Update 23.02-08.20.00 Released for October 2023


Sony rolled out PS5 system software update 23.02-08.20.00 this morning with some minor improvements and new features. One of the new features actually makes it easier for players to check for new PS5 features!

PS5 system update 23.02-08.20.00 patch notes (October 2023)

Complete patch notes are as follows:

  • Music in the control center is now much easier to use. With its two-column layout, you can easily browse various categories, playlists, and songs.
  • You can avoid unintentionally broadcasting your surroundings while broadcasting games with your PlayStation VR2.
    • Go to Settings > Accessories > PlayStation VR2, and turn on Block See-Through View on TV to prevent the camera see-through image from being displayed on your TV while using your PS VR2.
  • If you say “What’s new?” via Voice Command on any screen, you can now check for new PS5 features.
    • Currently, Voice Command (Preview) is only available in English for players with accounts for PlayStation Network in the US and UK.
  • We’ve improved the messages and usability on some screens.

Sony hasn’t said when Voice Command will roll out worldwide, but regardless of where you’re based, if you have a US or UK PSN account, you can preview the feature right now, provided you speak English. Other languages should become available when the current testing phase is complete.

The post PS5 System Update 23.02-08.20.00 Released for October 2023 appeared first on PlayStation LifeStyle.




The Morning After: Meta’s Threads reaches almost 100 million active users

Meta’s Threads continues to grow, all while the service it aped, X, continues to splutter and fall apart. Mark Zuckerberg said that Threads currently has “just under” 100 million monthly active users and that the app could reach 1 billion users in the next couple of years.

Threads picked up 100 million sign-ups in its first week, with easy ways to create an account from your existing Instagram profile. However, engagement dropped off amid complaints about limited functionality and feeds flooded with unwanted posts from brands and users with big audience numbers on Instagram. I was not interested in the piecemeal thoughts of startup execs with a podcast. Shocking, I know.

Meta has since steadily added new features, and engagement seems to have rebounded in recent weeks as Elon Musk continues to make unpopular changes to X, like stripping headlines from links and, well, all the other things.

— Mat Smith

You can get these reports delivered daily direct to your inbox. Subscribe right here!

The biggest stories you might have missed

Black Friday 2023: The best early deals

The White House will reportedly reveal a ‘sweeping’ AI executive order on October 30

iOS 17.1 is here with improvements to AirDrop and new flair for Apple Music

Apple will reportedly bring ANC to its 'regular' AirPods next year

X is rolling out an audio and video calling feature nobody asked for

What did we just say?

X (formerly known as Twitter) has begun rolling out yet another feature nobody asked for. Now, users will have the option to call each other via audio and video calls on the platform. This doesn't come as a total surprise, as CEO Linda Yaccarino previously confirmed that video chat would be coming to the social media site back in August. The best explanation for the addition is Elon Musk’s aim to make X the “everything” app – a one-stop shop for multiple features and services.

Continue reading.

DJI's Osmo Pocket 3 camera features a 1-inch sensor and a rotating display

It also offers 4K 120p video and ActiveTrack 6.0 stabilization.


DJI's Osmo Pocket 3 gimbal camera has arrived with major updates over the previous model, adding a much larger 1-inch sensor that should greatly improve image quality. It also packs a new 2-inch display with 4.7 times the area of the last model. That said, it's also significantly more expensive than the Pocket 2 was at launch: it costs $520 in the US, $170 more than its predecessor.

Continue reading.

Apple TV+ prices have doubled in just over a year

Apple One, Arcade and News+ plans are now more expensive too.

The price of Apple TV+ is going up by $3 per month to $10. The annual TV+ plan has risen from $69 to $99. Apple Arcade is now $7 per month instead of $5. As for Apple News+, that'll now run you $13 per month for a standalone subscription, up from $10. The cost of an Apple TV+ subscription previously went up from $5 per month to $7 in October 2022, meaning that the price of the service has doubled in just over 12 months.

Continue reading.

TikTok's first live 'global music event' will feature Cardi B and Charlie Puth

In The Mix will take place in Arizona on December 10.


TikTok In The Mix will take place in Mesa, Arizona on December 10 – the first global live music event from the video platform. The headliners are Cardi B, Niall Horan, Anitta and Charlie Puth, with surprise guests and performances by emerging artists. Followers of the four headliners will get presale codes to buy In The Mix tickets starting on October 27. The general sale will start on November 2 and TikTok will stream the event live on its app too.

Continue reading.

This article originally appeared on Engadget at https://ift.tt/bIKSwrX


Wednesday, 25 October 2023

The Honda Prelude returns as a concept EV

Honda has brought its iconic Prelude back in the form of a new concept EV, a two-door coupe that looks surprisingly ready for production, the company announced. No details about the powertrain were revealed, but Honda said it represents a preview of the company's future EV lineup and demonstrates its commitment to driver-focused performance.

The Prelude concept was revealed at the end of Honda's Tokyo Mobility Show presentation without many details, other than the appearance. It resembles the latest Honda Civic, particularly in the front end. It's less angular though, retaining the smoother lines that later versions of the original Prelude were known for. Other notable visual cues include bulging fenders, regular side mirrors (not cameras), a small spoiler and blacked out windows. The latter probably means that the concept doesn't have much in the way of an interior yet. 

The original Prelude put Honda on the map for front-wheel-drive performance, famously coming in second to the Porsche 944 in a 1984 Car and Driver shootout (while beating a Ferrari 308, Lotus Esprit, two other Porsches and a Toyota Supra in the process). It was discontinued in 2001, with the final US model offering 200 horsepower.  

Honda was very slow, reluctant even, to embrace electric cars — bringing the breakthrough Honda E to market was an uphill battle. And that vehicle likely won't get a follow-up, as Honda said earlier this year that it would focus on SUVs instead. However, CEO Toshihiro Mibe made clear that the Prelude concept represents the company's way forward in terms of sporty EVs. 

"The word 'prelude' means an 'introductory or preceding performance,'" he said. "This model will become the prelude for our future models which will inherit the 'joy of driving' into the full-fledged electrified future and embody Honda's unalterable sports mindset. The Prelude Concept is a specialty sports model that will offer [an] exhilarating experience."

Those comments suggest that the company will eventually build the Prelude, or something like it. That would be a way for Honda to move into EVs while still embracing its enthusiast performance heritage.


This article originally appeared on Engadget at https://ift.tt/GD7ea3I


Qualcomm's new audio chip uses Wi-Fi to massively extend headphone range

In addition to the Snapdragon 8 Gen 3 and the Snapdragon X Elite, Qualcomm has also introduced the S7 and S7 Pro Gen 1 at the Snapdragon Summit in Hawaii. The company said its new chips deliver six times the compute power of their predecessor, along with on-device AI capabilities. More intriguing, perhaps, is the S7 Pro's micro-power Wi-Fi connectivity, which will apparently allow users to "walk around a home, building or campus while listening to music or making calls."

As The Verge notes, the chip uses Qualcomm's Expanded Personal Area Network (XPAN) technology that can automatically switch a device's connection. When a user strays too far from their phone while their earbuds are connected to it via Bluetooth, for instance, XPAN switches the connection to a Wi-Fi access point. It can deliver 96kHz lossless audio via earbuds, Qualcomm's Dino Bekis told the publication, and it works with 2.4, 5 and 6GHz bands. Bekis also said that users only have to click on a prompt once to connect their earbuds powered by the chip to their Wi-Fi.

Outside of the S7 Pro's Wi-Fi connectivity, the platforms' on-board AI enables better responsiveness to the listener's environment if they want to hear ambient sounds. But if they want to block out their environment completely, the chips are supposed to be capable of Qualcomm's "strongest ever ANC performance" regardless of earbud fit.

These features will only be enabled when headsets, earbuds and speakers powered by the S7 and S7 Pro are paired with devices equipped with the new Snapdragon 8 Gen 3 mobile platform and Snapdragon X Elite, though. That means we won't be seeing products with the new sound chips on the market anytime soon. When they do come out, they'll most likely be meant for Android devices, seeing as Apple has its own ecosystem.

This article originally appeared on Engadget at https://ift.tt/3osARmu


Tuesday, 24 October 2023

New tool lets artists fight AI image bots by hiding corrupt data in plain sight

From Hollywood strikes to digital portraits, AI's potential to steal creatives' work (and how to stop it) has dominated the tech conversation in 2023. The latest effort to protect artists and their creations is Nightshade, a tool allowing artists to add undetectable pixels to their work that can corrupt an AI's training data, the MIT Technology Review reports. Nightshade's creation comes as major companies like OpenAI and Meta face lawsuits for copyright infringement and stealing personal works without compensation.

University of Chicago professor Ben Zhao and his team created Nightshade, which is currently being peer reviewed, in an effort to put some of the power back in artists' hands. They tested it on recent Stable Diffusion models and an AI they personally built from scratch.

Nightshade essentially works as a poison, altering how a machine-learning model produces content and what that finished product looks like. For example, it could make an AI system interpret a prompt for a handbag as a toaster or show an image of a cat instead of the requested dog (the same goes for similar prompts like puppy or wolf).
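Nightshade's actual perturbations are imperceptible pixel-level changes, and the paper is still under review, but the underlying idea (poisoning the text-to-image pairs a model trains on) can be sketched in its crudest form as caption corruption. This toy version of ours swaps target concepts in captions; it is not Nightshade's method, only a picture of why corrupted pairs skew what a model learns:

```python
# Crude illustration of training-data poisoning via mislabeled caption/image
# pairs. Nightshade itself perturbs pixels invisibly rather than editing
# captions; this only demonstrates the effect of corrupted training pairs.
import random

clean_data = [
    ("photo of a handbag", "handbag_001.png"),
    ("photo of a dog", "dog_014.png"),
]

POISON_MAP = {"handbag": "toaster", "dog": "cat"}  # attacker's target concepts

def poison(dataset, rate=0.5, seed=0):
    """Return a copy of the dataset with a fraction of captions corrupted."""
    rng = random.Random(seed)
    out = []
    for caption, image in dataset:
        if rng.random() < rate:
            for src, dst in POISON_MAP.items():
                caption = caption.replace(src, dst)
        out.append((caption, image))
    return out

print(poison(clean_data, rate=1.0))
# [('photo of a toaster', 'handbag_001.png'), ('photo of a cat', 'dog_014.png')]
```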

[Image: a range of poisoned samples showing the original art and how an AI could interpret them (Professor Ben Zhao / University of Chicago)]

Nightshade follows Zhao and his team's August release of a tool called Glaze, which also subtly alters a work of art's pixels, but makes AI systems detect the initial image as something entirely different from what it is. An artist who wants to protect their work can upload it to Glaze and opt in to using Nightshade.

A damaging technology like Nightshade could go a long way toward pushing AI's major players to license artists' work properly and compensate them for it (it seems like a better alternative to having your system rewired). Companies looking to remove the poison would likely need to locate every piece of corrupted data, a challenging task. Zhao cautions that some people might try to use the tool for malicious purposes, but says any real damage would require thousands of corrupted works.

This article originally appeared on Engadget at https://ift.tt/zw5p8UX

Chevy offers $1,400 to Bolt EV owners who endured lower charging levels

GM has announced that it will pay $1,400 to owners of 2020-2022 Bolt EVs and EUVs who endured a recall that capped battery charging at 80 percent for a significant period of time, Electrek has reported. It's effectively an upfront payment to customers as part of an expected class action settlement.

"GM is announcing a compensation program for 2020-22 Bolt EV/EUV owners upon installation of the final advanced diagnostic software as part of the original battery recall," a spokesperson wrote in a statement. "Owners are eligible to receive a $1,400 Visa eReward card upon installation. This applies to Bolt EV/EUV owners in the US only. We’re grateful to our customers for their patience and understanding."

Owners must install a "software final remedy" by December 31, 2023, and sign a legal release; those who decline will have to wait for the class action lawsuit to play out. If the settlement ends up being worth more than $1,400, those who accepted the payment will still receive the difference.
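
In other words, the $1,400 card works as an advance against whatever the class action ultimately awards. A trivial sketch of that offset (the settlement figures are hypothetical):

```python
def remaining_payout(settlement_per_owner: float, upfront: float = 1400.0) -> float:
    """If the eventual settlement exceeds the upfront card, owners who took the
    $1,400 get the difference; if it comes in lower, they simply keep the card."""
    return max(settlement_per_owner - upfront, 0.0)

print(remaining_payout(2000.0))  # -> 600.0 still owed
print(remaining_payout(1000.0))  # -> 0.0, nothing clawed back
```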

Chevy's Bolt EVs (and the larger EUVs that arrived in 2021) have seemingly never been free of battery problems. The 2017-2019 models had serious defects that could cause fires, forcing GM to recall them and install special software that reduced maximum charge levels to 90 percent.

The 2020-2022 models affected by the lawsuit used new battery chemistry with a different issue that could also cause a fire when the car was fully or nearly fully charged. GM issued a recall for those models as well, installing diagnostic software that reduced maximum charging levels to 80 percent (cutting range from about 259 miles to 207 miles). The software will eventually either warn customers that their battery pack needs to be replaced, or automatically return the maximum charge to 100 percent.

The problem is that the vehicles need to log 6,200 miles of use before the software makes its final assessment. That could take years for some buyers, and GM mandated that owners complete the diagnostic by March 2025 to qualify for an extended warranty or a replacement battery, if needed.

GM announced earlier this year that it was discontinuing the Bolt EV amid the company's shift to the Ultium platform, possibly because it felt the name had been sullied by the battery issues. Following an outcry, though, it backtracked and said that a next-gen Bolt was in the works — showing that people still liked what the Bolt stood for (a practical, cheap EV with decent range) despite the recalls. 

Presumably, any potential settlement would cover owners who effectively lost the full and expected use of their vehicles during the period. If you're part of the recall, you should receive a letter soon with more information and a unique PIN for accessing the compensation program's website.

This article originally appeared on Engadget at https://ift.tt/y4hYfZX

NASA's OSIRIS-REx mission collected more Bennu asteroid samples than first thought

NASA has revealed that it has already processed 70.3 grams of rocks and dust collected by the OSIRIS-REx mission from asteroid Bennu. That means the mission has far exceeded its goal of bringing 60 grams of asteroid samples back to Earth, especially since NASA scientists have yet to open the primary sample container that made its way back to our planet in September. Apparently, they're struggling to open the mission's Touch-and-Go Sample Acquisition Mechanism (TAGSAM) and could not remove two of its 35 fasteners using the tools currently available to them.

The scientists are processing the samples inside a specialized glovebox with a flow of nitrogen to keep them from being exposed to our atmosphere and any contaminants. They can't just use any implement to break the container's fasteners open, either: the tool must fit inside the glovebox, and it must not compromise the samples' integrity. NASA has sealed the primary sample container for now while it develops a procedure to open it over the next few weeks.

If you're wondering where the 70.3 grams of rocks and dust came from, NASA collected part of it from the external sample receptacle, outside TAGSAM itself. The total also includes a small portion of the samples inside TAGSAM, retrieved by holding down its Mylar flap and reaching inside with tweezers or a scoop. NASA's initial analysis of the material, published earlier this month, showed evidence of high carbon content and water, and further studies could help us understand how life on Earth began. The agency plans to continue analyzing and "characterizing" the rocks and dust it has already taken from the sample container, so we may hear more details about the samples even while TAGSAM remains sealed.

This article originally appeared on Engadget at https://ift.tt/ZjXz389

Monday, 23 October 2023

Tinder will let your family nag you and play virtual matchmaker

Tinder has rolled out a new feature dubbed “Tinder Matchmaker” that will allow users’ family and friends to access the dating app and make recommendations for potential matches. The matchmakers do not need to have a Tinder profile to view or suggest possible pairings. Hypothetically, that means anyone from your grandmother to your ex-boyfriend could help you select a new profile to match with.

A Tinder user will need to launch a “Tinder Matchmaker session” either directly from a profile card or within the app’s settings. From there, they can share a unique link with up to 15 people in a 24-hour period. Once a matchmaker gets a link, they can log into Tinder or continue as a guest.

A matchmaker gains access to profiles they can “like,” and any profile they like appears as a recommendation for the original Tinder user to see. The matchmaker’s abilities are limited, though: they can't send messages or actually swipe right on the profiles in question – ultimately, the Tinder user decides whether or not to match with anyone.
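
Stripped of the branding, the feature amounts to a capped, time-boxed share link with a like-only permission set. A toy model of those constraints as described, with all names and structure invented rather than taken from Tinder's actual service:

```python
# Toy model of the described Matchmaker constraints: one session, shareable
# with up to 15 people within 24 hours, and matchmakers may only "like"
# profiles. Hypothetical sketch, not Tinder's implementation.
import time
import uuid

MAX_MATCHMAKERS = 15
WINDOW_SECONDS = 24 * 60 * 60

class MatchmakerSession:
    def __init__(self):
        self.started = time.time()
        self.invites: dict[str, str] = {}   # token -> invitee label
        self.recommendations: list[str] = []

    def share_link(self, invitee: str) -> str:
        """Mint a unique guest link, enforcing the 15-person / 24-hour cap."""
        if time.time() - self.started > WINDOW_SECONDS:
            raise RuntimeError("session expired after 24 hours")
        if len(self.invites) >= MAX_MATCHMAKERS:
            raise RuntimeError("already shared with 15 people")
        token = uuid.uuid4().hex
        self.invites[token] = invitee
        return f"https://example.invalid/matchmaker/{token}"

    def like(self, token: str, profile_id: str) -> None:
        """A matchmaker's only power: surface a profile as a recommendation.
        The account owner still decides whether to actually match."""
        if token not in self.invites:
            raise PermissionError("unknown or revoked invite")
        self.recommendations.append(profile_id)
```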

“For years, singles have asked their friends to help find their next match on Tinder, and now we're making that so easy with Tinder Matchmaker," said Melissa Hobley, Tinder's chief marketing officer, of the new feature.

Bumble has a similar offering, in which a user can recommend a profile to a friend through a private link that only the recipient can open within the dating app. However, it’s geared more toward one-on-one sharing than Tinder Matchmaker is. Hinge, another key competitor, tried launching a separate Hinge Matchmaker app in 2017; its matchmakers were supposed to suggest pairings based on who the individuals knew personally from Facebook. That spinoff didn't last – the app is no longer available.

Tinder’s matchmaker feature is just the latest offering from the company designed to entice more users to engage with the app in new ways. Verification on Tinder got a boost with video selfies, incognito mode was finally introduced earlier this year and the company just started letting users specify gender pronouns and non-monogamous relationship types.

This article originally appeared on Engadget at https://ift.tt/wJsSlgc

Instagram's latest test feature turns users' photos into stickers for Reels and Stories

Instagram is testing a sticker creation feature that will let users make custom stickers from their own photos — and other users’, in some cases — and pop them into Reels or Stories. While Meta has been going all in on prompt-based, AI-generated stickers lately, this tool is something much simpler. It’ll just select the subject of a photo and remove the background, creating a free-floating sticker that can be placed over other content.
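
Instagram hasn't said what powers the cutout, but isolating a photo's subject is a standard semantic segmentation task. A rough sketch of the general approach with an off-the-shelf model (a generic illustration, explicitly not Instagram's pipeline):

```python
# Generic subject-cutout sketch, not Instagram's actual pipeline: run a
# segmentation model, keep non-background pixels and write the mask into
# the image's alpha channel to get a free-floating "sticker".
import torch
from PIL import Image
from torchvision.models.segmentation import deeplabv3_resnet50, DeepLabV3_ResNet50_Weights

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

def make_sticker(path: str) -> Image.Image:
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        out = model(preprocess(img).unsqueeze(0))["out"][0]
    labels = out.argmax(0)                      # per-pixel class predictions
    mask = (labels != 0).to(torch.uint8) * 255  # class 0 is background
    alpha = Image.fromarray(mask.numpy(), mode="L").resize(img.size)
    sticker = img.copy()
    sticker.putalpha(alpha)                     # background becomes transparent
    return sticker

make_sticker("dog.jpg").save("sticker.png")
```

The alpha channel is what makes the cutout free-floating: anything the model labels as background turns fully transparent, so the subject can be dropped over other content.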

Image: A demonstration of Instagram's sticker creation tool, showing a French bulldog selected as a sticker (Adam Mosseri / Instagram)

Adam Mosseri gave a brief demonstration of how it’ll work in a video shared to his broadcast channel. He also said that, in addition to creating stickers from photos saved on your phone, users will be able to make them from “eligible images you see on Instagram.” Mosseri didn’t share any further details on that, but it suggests users will be able to opt in to making their pictures stickerable.

It’s still just a test and hasn’t rolled out to all users, so we’ll see what that actually looks like in time. The platform last week started testing a new polling feature, too, which will show up in the comments section under feed posts.

This article originally appeared on Engadget at https://ift.tt/6RiBsK7
