The post Researchers trained an OpenAI rival in half an hour for $50 appeared first on 311 Institute.
Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.
Fresh on the back of the release of China’s DeepSeek R1 Artificial Intelligence (AI) model, which cost $5.6 million to train – many times less than other foundation AI models – researchers in the US have managed to create a low-cost AI reasoning model rivalling OpenAI’s o1 model in just 26 minutes, as outlined in a paper published last week. The model, called s1, was refined using a small dataset of 1,000 questions and for under $50, according to TechCrunch.
To do this, researchers at Stanford University and the University of Washington used a method known as distillation – which allows smaller models to draw from the answers produced by larger ones – to refine s1 using answers from Google’s AI reasoning model, Gemini 2.0 Flash Thinking Experimental. Google’s terms of service note that you can’t use Gemini’s API to “develop models that compete with” the company’s AI models.
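Conceptually, distillation here means collecting the larger model's answers and fine-tuning the smaller model to imitate them. Below is a minimal, illustrative sketch of that data pipeline – `teacher_answer` is a hypothetical stand-in for a real API call to a teacher model, not the researchers' actual code:

```python
def teacher_answer(question: str) -> dict:
    # Stand-in for a call to the teacher model's API (e.g. Gemini).
    return {"reasoning": f"Step-by-step reasoning for: {question}",
            "answer": "42"}

def build_distillation_set(questions: list[str]) -> list[dict]:
    """Turn raw questions into supervised fine-tuning examples that
    teach the student to imitate the teacher's full reasoning trace."""
    examples = []
    for q in questions:
        out = teacher_answer(q)
        examples.append({
            "prompt": q,
            # The student learns the reasoning, not just the final answer.
            "target": out["reasoning"] + "\nFinal answer: " + out["answer"],
        })
    return examples

dataset = build_distillation_set(["What is 6 * 7?"])
print(dataset[0]["target"].splitlines()[-1])  # Final answer: 42
```

The point of training on the full trace, rather than the bare answer, is that the student picks up the teacher's reasoning style as well as its conclusions.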
The Future of Generative AI and AI 2040, by AI Keynote Matthew Griffin
The researchers based s1 on Qwen2.5, an open source model from Alibaba Cloud. They started with a pool of 59,000 questions to train the model on, but found that the larger dataset didn’t offer “substantial gains” over a whittled-down set of just 1,000. The researchers say they trained the model on just 16 Nvidia H100 GPUs.
Breaking the AI Scaling Laws, Deepseek R1
The s1 model also uses a technique called test-time scaling, which lets the model “think” for longer before producing an answer. As noted in the paper, the researchers forced the model to continue reasoning by adding “Wait” to the model’s response. “This can lead the model to double-check its answer, often fixing incorrect reasoning steps,” the paper says.
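The “Wait” trick can be sketched as a simple decoding loop: whenever the model stops before a minimum thinking budget is spent, the stop is suppressed and “Wait” is appended. Everything here is illustrative – `generate_step` stands in for a real model's decoding pass:

```python
def generate_step(prompt: str) -> str:
    # Placeholder for one decoding pass of a real reasoning model.
    return prompt + " ...some reasoning... [end]"

def reason_with_budget(question: str, min_continuations: int) -> str:
    trace = generate_step(question)
    continuations = 0
    while continuations < min_continuations:
        # Strip the stop marker and append "Wait" to force more thinking.
        trace = trace.replace("[end]", "") + "Wait"
        trace = generate_step(trace)
        continuations += 1
    return trace

out = reason_with_budget("Is 91 prime?", min_continuations=2)
print(out.count("Wait"))  # 2
```

Each forced continuation gives the model another chance to notice and repair a mistake in its earlier reasoning, which is why the technique improves accuracy on hard questions.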
OpenAI’s o1 reasoning model uses a similar approach, something the buzzy AI startup DeepSeek sought to replicate with the launch of its R1 model that it claims was trained at a fraction of the cost. OpenAI has since accused DeepSeek of distilling information from its models to build a competitor, violating its terms of service. As for s1, the researchers claim that s1 “exceeds o1-preview on competition math questions by up to 27%.”
The rise of smaller and cheaper AI models threatens to upend the entire industry. They could prove that major companies like OpenAI, Microsoft, Meta, and Google don’t need to spend billions of dollars training AI, or build massive data centers filled with thousands of Nvidia GPUs.
The post OpenAI o3 claims top 200 global programming ranking at Informatics Olympiad appeared first on 311 Institute.
New research from OpenAI highlights the results of its reasoning models (the o-series) and how LLMs have evolved from amateur competitive programmers into rivals of the world’s best.
OpenAI’s latest AI model, o3, earned an impressive 2,724 rating on CodeForces, placing it in the 99.8th percentile. It also secured a gold medal-level score at the 2024 International Olympiad in Informatics (IOI).
According to the research, o3 outperforms the o1-ioi model, which was specifically fine-tuned for the IOI. This suggests that reinforcement learning can be more effective than hand-crafted, domain-specific approaches.
The Future of Work, by Leadership Keynote Speaker Matthew Griffin
At IOI 2024, o3 competed under standard conditions and crossed the gold medal threshold. On CodeForces, it ranked among the top 200 programmers globally, competing with elite human coders.
“General-purpose reasoning capabilities developed through reinforcement learning are now outperforming carefully hand-crafted, domain-specific solutions,” said Ethan Mollick, associate professor at The Wharton School. “Rather than building specialised systems for specific tasks, large, general-purpose models can achieve superior results through better reasoning abilities.”
The research is part of OpenAI’s ongoing efforts to assess its models’ performance in competitive programming and broader software engineering.
Anthropic, the company behind the Claude model series, also released a report on Monday which highlighted AI’s influence on the workplace.
The findings revealed that approximately 36% of all occupations incorporate AI for at least a quarter of their tasks. Moreover, 57% of AI applications enhance human capabilities, while 43% focus on automation. However, only 4% of occupations rely on AI for at least 75% of their tasks.
The study identified software development and technical writing as the primary areas where AI is utilised. In contrast, AI plays a minimal role in tasks that involve physical interaction with the environment.
The post Researchers discover DeepSeek AI sending user data directly to China Mobile appeared first on 311 Institute.
The website of DeepSeek, the Chinese Artificial Intelligence (AI) company whose chatbot became a sudden global smash hit and the most downloaded app in the United States, contains computer code that sends user login information to a Chinese state-owned telecommunications company barred from operating in the US, security researchers say.
The web login page of DeepSeek’s chatbot contains heavily obfuscated computer script that when deciphered shows connections to computer infrastructure owned by China Mobile, a state-owned telecommunications company. The code appears to be part of the account creation and user login process for DeepSeek.
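The kind of check the researchers describe boils down to scanning a page's source for references to watchlisted third-party infrastructure. Here is a toy sketch of that idea, using an illustrative domain and sample HTML rather than Feroot's actual fingerprints or tooling:

```python
import re

# Illustrative watchlist of infrastructure domains to flag; a real
# audit would use a vetted list of known state-linked endpoints.
WATCHLIST = ["cmpassport.com"]

def find_third_party_refs(html: str, watchlist: list[str]) -> list[str]:
    """Return every watchlisted domain referenced in the page source."""
    hits = []
    for domain in watchlist:
        # Escape the domain so dots match literally, not as wildcards.
        if re.search(re.escape(domain), html, re.IGNORECASE):
            hits.append(domain)
    return hits

sample = '<script src="https://CMPassport.com/sdk.js"></script>'
print(find_third_party_refs(sample, WATCHLIST))  # ['cmpassport.com']
```

Real-world detection is harder than this sketch suggests, because – as the researchers note – the script in question was heavily obfuscated and had to be deciphered before any such domain references became visible.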
In its privacy policy, DeepSeek acknowledged storing data on servers inside the People’s Republic of China. But the link to China Mobile uncovered by the researchers suggests its chatbot is more directly tied to the Chinese state than previously known. The US has claimed there are close ties between China Mobile and the Chinese military as justification for placing limited sanctions on the company. DeepSeek and China Mobile did not respond to emails seeking comment.
The Future of Cyber Security, by Cyber Keynote Speaker Matthew Griffin
The growth of Chinese-controlled digital services has become a major topic of concern for US national security officials. Last year, lawmakers in Congress voted on an overwhelmingly bipartisan basis to force the Chinese parent company of the popular video-sharing app TikTok to divest or face a nationwide ban, though the app has since received a 75-day reprieve from President Donald Trump, who is hoping to work out a sale.
The code linking DeepSeek to one of China’s leading mobile phone providers was first discovered by Feroot Security, a Canadian cybersecurity company, which shared its findings with The Associated Press. The AP took Feroot’s findings to a second set of computer experts, who independently confirmed that China Mobile code is present. Neither Feroot nor the other researchers observed data transferred to China Mobile when testing logins in North America, but they could not rule out that data for some users was being transferred to the Chinese telecom.
Inside DeepSeek R1, by AI Keynote Matthew Griffin
The analysis applies only to the web version of DeepSeek. The researchers did not analyze the mobile version, which remains one of the most downloaded pieces of software on both the Apple and Google app stores.
The US Federal Communications Commission unanimously denied China Mobile authority to operate in the United States in 2019, citing “substantial” national security concerns about links between the company and the Chinese state. In 2021, the Biden administration also issued sanctions limiting the ability of Americans to invest in China Mobile after the Pentagon linked it to the Chinese military.
“It’s mindboggling that we are unknowingly allowing China to survey Americans and we’re doing nothing about it,” said Ivan Tsarynny, CEO of Feroot.
“It’s hard to believe that something like this was accidental. There are so many unusual things to this. You know that saying ‘Where there’s smoke, there’s fire’? In this instance, there’s a lot of smoke,” Tsarynny said.
Stewart Baker, a Washington, DC-based lawyer and consultant who previously served as a top official at the Department of Homeland Security and the National Security Agency, said DeepSeek “raises all of the TikTok concerns plus you’re talking about information that is highly likely to be of more national security and personal significance than anything people do on TikTok,” one of the world’s most popular social media platforms.
Users are increasingly putting sensitive data into Generative AI systems – everything from confidential business information to highly personal details about themselves, via spell-checking, research, and even deeply personal queries and conversations. The data security risks of such technology are magnified when the platform is owned by a geopolitical adversary and could represent an intelligence goldmine for a country, experts warn.
“The implications of this are significantly larger because personal and proprietary information could be exposed. It’s like TikTok but at a much grander scale and with more precision. It’s not just sharing entertainment videos. It’s sharing queries and information that could include highly personal and sensitive business information,” said Tsarynny, of Feroot.
The post CEOs today will be the last to manage all human workforces as AI agents get to work appeared first on 311 Institute.
Today’s CEOs are likely the last who will “manage a workforce of only human beings,” Salesforce CEO Marc Benioff told Axios’ Ina Fried in Davos. The rise of Generative AI agents – which Benioff also described as “limitless digital labor” – is among the next wave of advancements for the tech.
“We are really moving into a world now of managing humans and agents together,” he said, highlighting his own company’s Agentforce product, launched in September along with a sandbox where Salesforce’s customers can test their agents. At the time, Benioff applauded the technology as “AI as it was meant to be.”
“Because I’m using Agentforce I just have that much more productivity,” he said, highlighting the increased ability of his Agentforce to resolve support inquiries, before adding that he’s thinking of ways to “redeploy” support agents in sales positions because those employees “don’t have as much to do because Agentforce is so productive for them.”
So, while I’ve talked about the unlimited human workforce many times before – as Artificial Intelligence (AI) automates jobs, tasks, and skills, such as coding and graphic design, and then democratises them for everyone – this new “limitless” workforce adds a very interesting new wrinkle. It also raises important questions: what happens when labour, and the skills you and your business need, are abundant? And what is the impact of that on company revenues and global GDP?
The post Elon Musk makes laughably serious hostile bid to buy OpenAI for $97 Billion appeared first on 311 Institute.
As if Elon Musk doesn’t have enough going on, a consortium of investors led by him announced plans Monday for what appears to be a hostile takeover of OpenAI. The investor group offered nearly $97.4 billion – which would make it one of the biggest M&A deals in history – to buy all of OpenAI’s assets and is “prepared to consider matching or exceeding higher bids,” it said in a press release sent to reporters.
OpenAI CEO Sam Altman isn’t having it. He immediately replied on X, “no thank you but we will buy twitter for $9.74 billion if you want,” and said that Musk was simply “trying to slow OpenAI down.”
The consortium includes Baron Capital Group, Valor Management, Atreides Management, Vy Fund III, Emanuel Capital Management, and Eight Partners. While Emanuel Capital Management has a slimmer public profile, the others are firmly in Musk’s orbit.
Baron Capital Group, which manages multiple mutual funds, was founded by Ron Baron. The firm’s Baron Partners Fund, which he manages with his son Michael Baron, has large stakes in Tesla and SpaceX.
Atreides Management is associated with Boston-based hedge fund Atreides. As we previously reported, founder Gavin Baker spent 18 years at Fidelity where he made his first investment in SpaceX. Atreides has also invested in Tesla and Baker was a public supporter of Musk’s enormous Tesla pay package.
Valor Management was founded by Antonio Gracias, an early SpaceX investor and former Tesla board member. He was also an investor in Musk’s SolarCity before Tesla acquired it.
Vy Capital, founded by Alexander Tamas, also has a SpaceX stake and has invested in a number of other Musk companies like The Boring Company and Neuralink.
Eight Partners VC is better known as Joe Lonsdale’s firm 8VC, according to public filings. Lonsdale is a vocal fan of Musk and runs in similar circles. He recently appeared on CNBC calling himself “a huge fan” of Musk’s DOGE, an interview that Musk reposted on X.
It’s not yet clear how serious this group is. One plausible analysis floating around the internet is that this is as much trolling as a genuine offer. Some say this is Musk’s attempt to drive up the price that Altman’s team would have to pay for OpenAI’s underlying assets in order to restructure the organisation away from its original non-profit status.
Musk was part of the founding of OpenAI as a nonprofit AI research organization, and has been attempting to halt Altman’s restructuring plans.
The post Microsoft research shows AI is damaging people's critical thinking abilities appeared first on 311 Institute.
What if the most pressing danger of Artificial Intelligence (AI) is not its ability to replace jobs, as more than one in five US workers fear, but its potential to cause cognitive decline? Researchers at Microsoft and Carnegie Mellon University published a new study last month that claims to be the first to examine the effects of AI tools on critical thinking.
The researchers found that the more confident people were in AI’s ability to get a task done, the fewer critical-thinking skills they used. Humans confident in AI left the critical thinking to ChatGPT instead of doing it themselves and strengthening their cognitive abilities. And, bearing in mind that critical thinking is widely regarded as one of THE most important soft skills for staying relevant in the future, the results of this new study could have wide-reaching impacts.
The Impact of AI on Jobs and Skills, by Leadership Keynote Speaker Matthew Griffin
“Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved,” the researchers wrote, adding that “a key irony of automation is that by mechanizing routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgment and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise.”
The researchers surveyed 319 knowledge workers, or workers who handle data or information, to find how confident they were in AI’s capabilities and how much critical thinking they employed when using AI to complete tasks. Critical thinking was defined as falling under one of six categories: Knowledge (remembering ideas), Comprehension (understanding ideas), Application (putting ideas to work in the real world), Analysis (contrasting and relating ideas), Synthesis (combining ideas), and Evaluation (judging ideas).
The surveyed knowledge workers used AI like ChatGPT at least once a week and gave 936 examples of how they used AI at work, ranging from looking up facts to summarising a text. They mainly used critical thinking to set clear prompts, refine prompts, and verify AI responses against external sources.
Six out of the seven researchers listed are associated with Microsoft Research, the research subsidiary of Microsoft created in 1991. Microsoft has deep interests in AI, with its investment in ChatGPT-maker OpenAI totalling close to $14 billion and its plans to spend $80 billion on AI data centers in the fiscal year ending in June.
The researchers caution that while AI can make workplaces more efficient, it could “also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving.”
In other words, AI has a hidden cost: It could lead workers to lose muscle memory for more routine tasks.
The post DeepMind’s AlphaZero has taught itself how to control powerful quantum computers appeared first on 311 Institute.
Quantum computing and Artificial Intelligence (AI) are born bedfellows, but despite this, chatter about Quantum Artificial Intelligence (QAI) has so far been very muted. Researchers and companies throughout the world are constantly working on developing this technology, which could solve extremely complicated problems that are too advanced for classical computers – such as how to use AI to control plasma in a state-of-the-art fusion reactor.
One such group working on developing a quantum computer is at Aarhus University, where a research team led by Professor Jacob Sherson – following on from the use of AI to control a fusion reactor – used the same computer algorithm, AlphaZero from DeepMind, to control an entire quantum computing system and improve its efficiency.
The Future of AI, by AI Keynote Speaker Matthew Griffin
Quantum computers utilize quantum mechanics, the branch of physics that focuses on the smallest building blocks of our universe. One of its fundamental rules is that a system can exist in more than one state at a time. These rules get translated into computer language, letting a quantum computer effectively explore multiple computational paths at the same time – which means it can vastly outperform regular computers on certain problems.
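The “more than one state at a time” rule can be made concrete with a toy two-amplitude qubit: applying a Hadamard gate to a definite |0⟩ state yields equal probability of measuring either basis state. This is only a textbook-style illustration, not how real quantum hardware is programmed:

```python
import math

def hadamard(state):
    # The Hadamard gate mixes the two basis amplitudes equally.
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    # Measurement probabilities are the squared magnitudes of amplitudes.
    return tuple(abs(x) ** 2 for x in state)

zero = (1.0, 0.0)                  # qubit definitely in |0>
superposed = hadamard(zero)
p0, p1 = probabilities(superposed)
print(round(p0, 3), round(p1, 3))  # 0.5 0.5 - both outcomes equally likely
```

Simulating n qubits this way requires tracking 2^n amplitudes, which is exactly why classical computers struggle to emulate large quantum systems.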
The theory of quantum computers is well established, but a full-scale quantum computer has yet to be built. AlphaZero is capable of learning on its own without any intervention from humans. Because of this, the algorithm has been able to defeat both humans and complex computer programs in difficult games like Go, Shogi, and Chess. AlphaZero did this by competing against itself and improving over time.
The algorithm was able to beat the leading chess program Stockfish after playing against itself for just four hours. After that impressive performance, Danish grandmaster Peter Heine Nielsen compared AlphaZero to a superior alien species.
The research group at Aarhus University has used computer simulations to demonstrate how AlphaZero can be applied to three different control problems which could possibly be used in a quantum computer.
“AlphaZero employs a deep neural network in conjunction with deep lookahead in a guided tree search, which allows for predictive hidden-variable approximation of the quantum parameter landscape. To emphasize transferability, we apply and benchmark the algorithm on three classes of control problems using only a single common set of algorithmic hyperparameters,” according to the study.
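Stripped to its essentials, “a deep neural network in conjunction with deep lookahead in a guided tree search” means scoring candidate actions by a learned value estimate plus a policy prior. The sketch below is a toy one-step version with a hand-written stand-in network and invented control actions – not the Aarhus group's actual code, and far simpler than AlphaZero's full Monte Carlo tree search:

```python
import math

def network(state: tuple) -> tuple[dict, float]:
    # Stand-in for the policy/value network: uniform priors over a few
    # toy control actions, and a value preferring states whose total
    # drive sums to zero (a toy target for a control problem).
    actions = ("pulse_up", "pulse_down", "hold")
    priors = {a: 1 / len(actions) for a in actions}
    value = -abs(sum(state))
    return priors, value

def apply_action(state: tuple, action: str) -> tuple:
    delta = {"pulse_up": 1, "pulse_down": -1, "hold": 0}[action]
    return state + (delta,)

def select_action(state: tuple, c_puct: float = 1.0) -> str:
    # One step of prior-guided lookahead: score each action by the
    # value of the resulting state plus an exploration bonus.
    priors, _ = network(state)
    best_action, best_score = None, -math.inf
    for action, prior in priors.items():
        _, value = network(apply_action(state, action))
        score = value + c_puct * prior
        if score > best_score:
            best_action, best_score = action, score
    return best_action

print(select_action((1,)))  # pulse_down - steers the drive back to zero
```

In the real system the network is deep, the lookahead runs many simulated rollouts, and the states describe quantum control parameters rather than a single running sum, but the selection principle is the same.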
The team’s research was published in npj Quantum Information.
Lead Ph.D. student Mogens Dalgaard spoke about how impressed the team was with AlphaZero’s ability to quickly teach itself.
“When we analyzed the data from AlphaZero we saw that the algorithm had learned to exploit an underlying symmetry of the problem that we did not originally consider. That was an amazing experience.”
The real breakthrough came from pairing AlphaZero, which is an extremely impressive algorithm on its own, with a specialized quantum optimization algorithm.
According to Professor Jacob Sherson, “This indicates that we are still in need of human skill and expertise, and that the goal of the future should be to understand and develop hybrid intelligence interfaces that optimally exploits the strengths of both.”
The group wants to quicken the pace of development in the field, so it released the code and made it openly available – a move that generated a lot of interest.
“Within a few hours I was contacted by major tech-companies with quantum laboratories and international leading universities to establish future collaboration,” Jacob Sherson said, “So it will probably not be long until these methods will find use in practical experiments across the world.”
DeepMind is the UK-based Google sister company responsible for both AlphaZero and AlphaGo. Its systems are now showing their importance in other areas, including quantum computing.
The post US Navy uses AI to train lasers to hit hypersonic missiles faster appeared first on 311 Institute.
The US Navy is working to eliminate the need for a human operator when countering drone swarm attacks. An effort led by the Naval Postgraduate School (NPS) used Artificial Intelligence (AI) to make laser weapons better able to target and destroy multiple attacking drones.
With their ability to engage targets at the speed of light, lasers are being seriously developed by the major military powers as a counter to many threats – including for the development of actual force fields – not the least of which is the presence of increasingly sophisticated drones.
However, lasers are hardly a panacea, and they have a number of problems that need to be overcome if they are to become practical weapons. For starters, current laser systems require a human operator with a certain degree of finesse when it comes to identifying and firing on targets.
Essentially, the problem can be divided into two tasks. In the case of attacking drones, first is to identify what kind of drone it is in order to determine which weak spots to attack. The second is to train the laser beam on that weak spot long enough to destroy or neutralize the target – a tricky challenge that’s bound to get trickier as autonomous drones become quicker and more agile in flight – let alone as they start to be made from new nano-reflective materials.
Human operators still have a chance of succeeding against a single drone, but swarms of the things are another matter as the Chinese PLA recently discovered. True, a laser can flick from one target to the next in a fraction of a second, but identifying a weak spot and fixing the beam on it is another matter entirely. In a combat situation, a human operator would be quickly overwhelmed. As lasers advance to handle hypersonic missiles, the problem gets even worse.
As a collaboration between NPS, Naval Surface Warfare Center Dahlgren Division, Lockheed Martin, Boeing, and the Air Force Research Laboratory (AFRL), a new tracking system for anti-drone lasers is being developed that uses AI to overcome human limitations in not only targeting, but handling atmospheric distortions over long distances that can cause a laser beam to stray off target.
The team trained an AI system using a miniature model of a Reaper drone, 3D printed out of titanium alloy. This was scanned in infrared light and with radar to simulate how a full-sized drone would look through a telescope from various angles and distances under conditions of less-than-perfect visibility.
The image catalogs produced two datasets of 100,000 images that were used to train an AI system so that it could identify the drone, confirm its angle relative to the observer, seek out the weak spot, and fix the beam on that spot. Meanwhile, radar input provided data for determining the drone’s course and distance. A series of three AI training scenarios were then posed to train the system. The first used only synthetic data, the second a combination of synthetic and real-world data, and the third with only real-world data.
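The three scenarios amount to fitting the same model on different data mixes and comparing its error on held-out real data. Here is a toy illustration of that comparison, with made-up two-number "features" standing in for the infrared and radar imagery – nothing here is the Navy's data or model:

```python
def centroid(samples):
    # Fit the simplest possible "model": the mean feature vector.
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(2)]

def error(model_centroid, test_samples):
    # Mean squared distance of held-out samples from the learned centroid.
    return sum((s[0] - model_centroid[0]) ** 2 + (s[1] - model_centroid[1]) ** 2
               for s in test_samples) / len(test_samples)

synthetic = [[0.0, 0.0], [0.2, 0.1]]   # simulated/rendered views
real      = [[1.0, 1.0], [1.1, 0.9]]   # captured real-world views
held_out  = [[1.05, 0.95]]             # evaluation set of real views

scenarios = {
    "synthetic_only": synthetic,
    "mixed": synthetic + real,
    "real_only": real,
}
errors = {name: error(centroid(data), held_out)
          for name, data in scenarios.items()}
best = min(errors, key=errors.get)
print(best)  # real_only
```

The toy numbers are rigged so real-only data wins, mirroring the Navy's reported outcome; in practice the value of synthetic data depends heavily on how closely the simulation matches real sensor conditions.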
According to the US Navy, the third scenario worked the best with the least margin of error.
The next step will be field testing with radar and optical tracking of real targets, using a semi-autonomous system in which a human operator controls some aspects of tracking.
“We now have the model running in real-time inside of our tracking system,” says Eric Montag, an imaging scientist at Dahlgren. “Sometime this calendar year, we’re planning a demo of the automatic aimpoint selection inside the tracking framework for a simple proof of concept,” Montag adds. “We don’t need to shoot a laser to test the automatic aimpoint capabilities. There are already projects — [The High Energy Laser Expeditionary (HELEX) demonstrator] being one of them — that are interested in this technology. We’ve been partnering with them and shooting from their platform with our tracking system.”
The research was published in Machine Vision and Applications.
The post Microsoft CEO says thirty years of progress have been condensed into three appeared first on 311 Institute.
Microsoft CEO Satya Nadella announced the formation of a new division at the tech giant called “CoreAI – Platform and Tools” in response to what he sees as rapidly accelerating Artificial Intelligence (AI) development.
In an internal memo to employees, Nadella explained that changes that would normally take decades are now happening in just a few years.
“Thirty years of change is being compressed into three years!” Nadella wrote.
Think, Feel, Act, Disrupt, by Innovation Keynote Matthew Griffin
The new CoreAI department will unite Microsoft’s AI efforts across the company, combining the Copilot platform, AI agents, and developer tools. Microsoft is merging several teams to achieve this goal, including its development division, AI platform group, and core teams from the CTO’s office.
Former Meta executive Jay Parikh will lead CoreAI after spending more than eleven years as Meta’s Vice President and Global Head of Engineering. His primary task is to create what he calls an “End-to-End AI stack” for applications driven – and even created – by AI agents, though some in the AI industry debate the role of these agents.
Nadella believes that software development is about to change fundamentally. He expects new user interfaces and AI agents to build and run applications very differently from the way we do today. These systems will adapt their capabilities based on specific roles, business processes, and industries, potentially outperforming current software in both performance and security.
Microsoft will use its Azure cloud computing platform – which will see hundreds of billions of dollars’ worth of investment in the coming years – as the foundation for this AI infrastructure, supporting tools like Azure AI Foundry, GitHub, and VS Code. AI agents will also be used to develop and maintain AI application code. CoreAI will continue to work on GitHub Copilot, using lessons learned from the AI coding tool to improve the broader AI platform.
The company calls this approach “Service as Software,” allowing software to develop custom applications or AI agents that can change SaaS applications on their own. According to Nadella, success in this new phase will depend on having better AI platforms, tools, and infrastructure.
The post HR giant Workday launches a new platform to manage all your AI Agents appeared first on 311 Institute.
HR giant Workday is launching a new way for enterprises to keep track of all of their Artificial Intelligence (AI) agents in one place. It’s also launching a few more AI agents of its own, for good measure.
Silicon Valley-based Workday announced on Tuesday the release of Workday Agent System of Record, which is meant to help customers keep track of their AI agents in one control center, regardless of whether the agent is managed by Workday or by a third party like Salesforce.
This system gives companies visibility into what all of their agents are supposed to be doing, which tasks they are actually completing, and who at the company has access to which agents. The software tracks the impact of each AI agent and projected costs. Companies can also use the platform to turn agents on and off entirely or turn certain skills or tasks on or off.
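One plausible shape for such an "agent system of record" is a registry of per-agent metadata with toggles for the whole agent and for individual skills. The field names below are guesses for illustration only, not Workday's actual schema or API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    name: str
    vendor: str                      # e.g. "workday" or "salesforce"
    owner: str                       # who at the company controls it
    monthly_cost_usd: float          # projected cost tracked per agent
    enabled: bool = True             # master on/off switch
    skills: dict[str, bool] = field(default_factory=dict)

    def set_skill(self, skill: str, on: bool) -> None:
        # Turn an individual skill on or off without disabling the agent.
        self.skills[skill] = on

registry = [
    AgentRecord("payroll-agent", "workday", "hr-ops", 500.0,
                skills={"run_payroll": True, "answer_queries": True}),
    AgentRecord("crm-agent", "salesforce", "sales-ops", 300.0),
]

registry[0].set_skill("answer_queries", False)
total_cost = sum(a.monthly_cost_usd for a in registry if a.enabled)
print(total_cost)                    # 800.0
```

Keeping third-party agents (like the Salesforce one here) in the same registry as Workday's own is the key design point: one place to see what every agent may do, who owns it, and what it costs.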
22 of the BIGGEST Societal Changes by 2040, by Futurist Keynote Matthew Griffin
Workday CEO Carl Eschenbach said in a press briefing that as AI agents become an increasing part of an enterprise company’s workflow, companies should have a way to manage them just as they manage people.
“The workforce is expanding,” Eschenbach said. “It’s no longer just human workers, it’s now digital workers, and we need to have a unified platform that manages your entire workforce going forward.”
Eschenbach added that while AI will be one of the biggest technological breakthroughs of our lifetime, enterprises are still nervous about adopting the tech. He said they worry they could lose control of the agents at their company, or that agents could get access to data they aren’t supposed to have. Workday hopes this product can ease those concerns.
Workday also announced a fleet of its own new AI agents that are focused on areas including payroll, contracts, and financial auditing, among others. The company previously released four agents in September that targeted areas including recruitment and talent mobility.
Workday’s new agents are meant to be role-driven rather than task-driven like many other AI agents. David Somers, Workday’s chief product officer, said that focusing agents on a role rather than a task gives them specific knowledge and expertise, allowing them to collaborate better with human employees.
“A lot of people talk about agents, but these agents today are doing a task. They’re just doing something that is repetitive and making it faster,” Eschenbach added. “The true power of agents is when they become role based, and these role-based agents will maybe start out with a skill, but they will, over time, have many skills, and this is how we will truly unlock the power of AI.”
Workday won’t charge enterprises for use of the AI agent managing platform but will charge for use of Workday AI agents, as it has in the past.
The announcement comes a week after Workday laid off 1,750 employees, about 8.5% of its headcount, with Eschenbach stating at the time that the world had changed and the company needed a new approach, which included looking to hire more AI talent.