In the ever-evolving world of technology, 2024 brought some exciting innovations alongside an alarming number of trends that expose the pitfalls of our current tech culture.
From overhyped AI gimmicks to privacy erosion and unsustainable hardware practices, here are some of the worst tech trends of 2024 that have frustrated consumers and industry leaders and are unlikely to abate next year.
Generative AI dominated 2023, but by 2024, the trend spiraled into absurdity. Countless companies have rolled out AI-powered tools that address non-existent problems — or create entirely new ones.
AI now generates everything from poorly edited videos and unintelligible blog posts to automatically written emails that require human intervention to fix. Tools that claim to offer productivity boosts often result in inefficiencies because of their flawed outputs.
The flood of low-quality AI products has undermined trust in genuinely helpful AI innovations. Small businesses and consumers alike are overwhelmed by tools with overblown marketing promises.
Many of these deficient AI solutions add another layer of automation without offering real value. This overproduction has created noise, making it harder to distinguish truly transformative tools from mere gimmicks.
Former Oracle CEO Larry Ellison once famously said, “Privacy is Dead.” However, privacy has been resurrected and killed more times than a Tyrannosaurus Rex in a “Jurassic Park” sequel.
Digital privacy continues to erode in 2024 as big tech companies push the boundaries of data collection under the guise of personalization. This year, the rise of AI-driven surveillance tools has become particularly concerning. Facial recognition is now integrated into everything from retail stores to public transportation systems without sufficient regulation or oversight.
Hyper-targeted ads across platforms and connected technologies have reached a tipping point. New technologies scrape data from various devices at unprecedented levels, often without users’ consent or precise opt-out options. For instance, smart home devices have increasingly come under fire for tracking conversations and usage patterns far beyond their intended purpose.
Perhaps most worrying is the resurgence of the “we’re improving your experience” excuse. Tech companies increasingly bypass GDPR-like protections with convoluted terms of service agreements that make opting out prohibitively difficult. This unfortunate phenomenon sets a dangerous precedent for future interactions between consumers and technology.
Most tech users will identify with this trend. In 2024, the “everything-as-a-service” model has reached absurd new heights.
From software to hardware, companies are turning more and more products into monthly subscriptions. Consumers are now paying subscriptions for products that were traditionally one-time purchases: car manufacturers charging for heated seats, printers requiring monthly fees to unlock ink usage, and even smart home locks demanding ongoing payments to access advanced features.
The subscription model has become synonymous with monetizing basic functionality. What started with streaming platforms has now spread to nearly every product category. It has become overwhelming, financially unsustainable, and increasingly frustrating for many consumers. Companies risk alienating their customer base by prioritizing recurring revenue over user experience.
Tech companies have revived a troubling trend of overhyping products that don’t exist in usable forms. This year has been marked by grand promises of game-changing devices and services that either underdeliver or never materialize.
One example is the push for AI PCs, where marketing campaigns tout devices with unmatched capabilities that remain largely theoretical. Similarly, augmented reality (AR) platforms have made headlines, yet most consumers still lack meaningful use cases beyond demo videos and niche applications.
This trend mirrors the vaporware hype of the early 2000s, where buzzwords like “digital transformation” were attached to half-baked products. In 2024, buzzwords such as “quantum-ready” and “AI-powered” are increasingly slapped onto underdeveloped offerings to ride the tech wave, undermining consumer trust.
While I am optimistic about the rise of PCs with AI integrated at the silicon level (Windows and Mac alike, whether x86, Arm, or Apple Silicon), the jury is still out on whether mainstream consumers have drunk the AI Kool-Aid.
The unsustainable tech upgrade cycle will worsen in 2025. Major hardware manufacturers continue to push minor annual refreshes of devices while retiring older models earlier than necessary. Smartphones, laptops, and wearables now seem designed for obsolescence, forcing users to replace functional devices far too soon.
This approach has generated alarming levels of electronic waste. Consumers face limited repair options as companies lock down parts and restrict third-party fixes, leading to devices being thrown away rather than repaired. Additionally, the push for disposable devices contradicts the industry’s public commitments to sustainability.
In parallel, new hardware launches often emphasize gimmicky features, like foldable screens or AI-generated wallpapers, that offer little utility. Meanwhile, genuine performance improvements are increasingly incremental, leaving users questioning whether upgrades are worth the cost.
AI surveillance tools have seen rapid adoption, particularly in workplaces and schools. Employers increasingly turn to AI monitoring software to track productivity by analyzing keystrokes, screen activity, and facial expressions. This invasive approach erodes trust between employers and employees while normalizing intrusive surveillance practices.
Similarly, schools have begun implementing AI tools to monitor students’ attention and behavior, often with flawed algorithms. These technologies reinforce punitive environments and disproportionately impact vulnerable communities. Critics argue that such systems prioritize control over genuine engagement or well-being.
Social media algorithms in 2024 have become worse than ever, prioritizing engagement metrics over quality content. Platforms are flooded with clickbait, misinformation, and sensationalized posts designed to keep users scrolling endlessly. Genuine connection — once the core promise of social media — has been replaced by a relentless pursuit of ad revenue.
Adding insult to injury, platforms have ramped up the push for paid verification and algorithmic boosts, forcing creators to pay for visibility. This pay-to-play model exacerbates inequality in content discovery, pushing smaller creators to the margins.
While technology has the potential to improve lives, 2024 has brought forth trends that emphasize profit, surveillance, and short-term gains over long-term innovation and ethical considerations.
From the glut of useless AI tools to worsening e-waste and dystopian surveillance practices, it’s clear that the tech industry needs a course correction.
Consumers, regulators, and innovators alike must push for responsible, meaningful advancement; otherwise, these trends will define the future of technology.
While artificial intelligence has juiced the marketing departments of smartphone makers like Apple and Samsung, it isn’t generating much enthusiasm among users, according to a survey released Monday by a site for selling used electronics.
The survey by SellCell of more than 2,000 iPhone and Samsung users found that 73% of iPhone and 87% of Samsung users said that the AI features on their phones added little to no value to their smartphone experience.
Users’ low opinion of the AI on their phones reflects confusion in the market. “While companies are saying ‘now with AI’ or ‘AI included,’ they’re not telling users what to do with it,” said HP Newquist, executive director of The Relayer Group, a business consulting firm in New York City.
“They’re telling users, you now have access to AI. You can now use AI,” he told TechNewsWorld. “They’re just saying, here it is. You’ve got it now. And quite frankly, that’s not a compelling reason to use AI.”
“We’re getting AI thrust at us, and I think consumers are completely nonplussed by that,” he observed.
“We’re finding the same exact thing in corporate America,” he continued. “They’re getting told, you need to use generative AI. You need to use agentic AI. But they’re not being told how specifically it can benefit them. Until that’s made clear both at the consumer and the corporate level, you are going to have a fairly tepid response from first-time users.”
Privacy concerns may be dampening enthusiasm about AI among iPhone users, contended Mark N. Vena, president and principal analyst at SmartTech Research in Las Vegas. “Apple users have high expectations for data protection and skepticism about whether the features offer meaningful improvements beyond what competitors already provide,” he told TechNewsWorld.
“Limited compatibility, with AI features likely restricted to newer iPhone models, may also alienate users of older devices,” he added.
On the Samsung side of things, Vena continued, Galaxy AI lacks differentiation from other Android-based AI offerings, which may reduce excitement. “Samsung’s features might appear incremental rather than groundbreaking.”
“Additionally, inconsistent user experiences with Samsung’s software and AI across devices could contribute to lower enthusiasm, compared to the more tightly integrated Apple ecosystem,” he said.
Greg Sterling, co-founder of Near Media, a market research firm in San Francisco, asserted that one of the central problems with Apple Intelligence is that it’s not well explained or well understood by the public. “Apple needs to do more to educate people about what the features are and when they will be available,” he told TechNewsWorld.
Tim Bajarin, president of Creative Strategies, a technology advisory firm in San Jose, Calif., agreed. “AI integration in smartphones is new and not well understood by the average user,” he told TechNewsWorld. “Google and Apple need to do more tutorial-like posts that show users the new AI features and how to use them.”
“AI requires you to learn how to prompt, and it’s not easy,” added Rob Enderle, president and principal analyst at the Enderle Group, an advisory services firm in Bend, Ore.
“So we have a lot of training in front of us with regard to users knowing how to use this stuff,” he told TechNewsWorld. “I would expect the survey to be bad this early simply because Apple Intelligence hasn’t been available for very long, and people just don’t know how to use it yet.”
Sterling added that the multiple features clustered under the rubric Apple Intelligence are rolling out incrementally over time, so users haven’t really seen the concrete benefits yet. “In a year or two, I suspect this survey would have different outcomes,” he predicted.
Will Kerwin, an equity analyst with Morningstar Research Services in Chicago, also cited the drawn-out rollout of Apple Intelligence as a source of consumer apathy toward AI on their iPhones. “We believe it’ll take consumers time to fully bake in how Apple Intelligence is most useful to them and adapt personal habits,” he told TechNewsWorld.
“This all informs our view that Apple iPhone sales driven by AI will be stronger in fiscal 2026 than they are currently in fiscal 2025,” he said.
Runar Bjørhovde, an analyst with Canalys, a global market research company, added: “The stark reality is that most people don’t buy phones because of AI. They buy because of different features.”
“If we think of the type of features that AI has enabled, they are not that interesting right now,” he told TechNewsWorld.
“It’s honestly not that surprising right now that AI features might disappoint people a bit because they’re not as advanced in reality as some of the marketing and messaging say they are,” he said.
Bjørhovde maintained that many tech firms are having an “existential crisis,” where they’ve lost the huge hype and interest that people have had in them for the last 20 years.
“They have to come up with new stories to try and get people interested,” he contended. “So, AI is a gold mine right now. I believe it can give us some really interesting innovations in a few years. But for now, it is this marketing bubble where people don’t actually know what to believe.”
The SellCell survey also found that about one in six iPhone users (16.8%) said they would consider switching to Samsung if it offered better AI features. In contrast, only 9.7% of Samsung users said they’d consider moving to Apple for better AI features.
It added that the percentage of users loyal to Apple has declined from 92% in 2021 to 78.9% now. That compares to a decline from 74% to 67.2% over the same period for Samsung.
“In general, the excitement around Apple’s annual upgrade cycle has declined a lot,” said Ross Rubin, the principal analyst at Reticle Research, a consumer technology advisory firm in New York City.
“These AI features are an attempt to inject something new and exciting into the experience,” he told TechNewsWorld. “But consumers are looking for a baseline of functionality and don’t think the platform is as much of an issue anymore.”
Still, the finding that so many Apple users might be willing to jump ship for AI is surprising, he acknowledged. “Apple users just tend to be far more likely to opt into Apple services,” he explained. “Because of the App Store investments, you can’t necessarily move all that stuff to another platform. So that makes the reported greater willingness to switch surprising.”
However, not everyone sees Apple’s fan base as waning. “We don’t see brand loyalty slipping in our surveys,” Bajarin declared. “We expect Apple to have a blockbuster holiday season, with iPhone sales drawing many ‘switchers’ to the Apple ecosystem.”
“We also don’t think loyalty to Apple is going away,” Kerwin added. “In our view, iPhone users are significantly likely to remain iPhone users, and AI features are just another means of locking them into Apple’s ecosystem.”
Many amazing products launched this year, and we’ll cover some of them. However, unlike most years, when the decision was difficult, one product stood out well above the rest as revolutionary. What made it particularly special was that it was a bold bet that paid off, one that helped its manufacturer become one of the most valuable companies in the world (and, for a time, the most valuable).
Typically, I don’t do a Product of the Week for this column as it focuses on the Product of the Year, but I ran into a confection that I’ve become almost addicted to, thanks to my love of dark chocolate, so I’m compelled to squeeze it in. It’s from a small company called Sweet as Fudge. So, before we revisit 2024 and discuss some of the standout products, let’s talk about this chocolate.
The product is Dark Chocolate Triple-Dipped Malt Balls. There’s also a sugar-free version that’s surprisingly good. A pound costs around $15, but these things are so good. I keep them in the refrigerator so they melt more slowly in my mouth. If you are into chocolate and like malt (chocolate malt is my go-to drink in the summer), give them a try this holiday season.
Now, on to the main event. Let’s begin with the contenders that didn’t quite make the top of my list and conclude with the one that earned the title of Product of the Year.
It’s a shame this company didn’t survive the year, but the Fisker Ocean — which I had planned to buy — was arguably the best electric SUV on paper. It had a bunch of cool, unique features like a solar panel roof and a pull-up table so the driver could enjoy a snack or meal in the car without making a mess.
This eSUV was well-designed and had decent range, and I thought it was one of the best-looking vehicles on the market. Operating inefficiencies and apparently some financial mismanagement killed the company and the car. RIP, Fisker.
I bought a used 2022 Audi e-tron GT after Fisker went under. Right after I bought it, Audi released the 2025 model, and it is a beast. It has nearly 1,000 hp, giving it blazing performance (more than I need, as my base car is quick enough), and a hydraulic suspension that can adjust the car’s ride height nearly instantly. And the car is beautiful.
Oh, and Audi added around 100 miles of range over my car, making the 2025 model far more useful as a daily driver. At $171,000, it is out of my budget, but wow, what a car!
I think Hyundai’s approach with the Ioniq 5 N electric car is the way to go for those who miss the engine sound and gear shifting of a manual gas car. This Hyundai is arguably the most fun electric car in the world. It does a decent job faking engine sounds, and the shifting feels like a real gearbox — even though it is emulated and actually makes the car slower. But it is incredibly fun to drive and reasonably quick (0 to 60 in 3.25 seconds).
The Hyundai Ioniq 5 N isn’t badly priced for a performance car that lists under $70,000, and it’s likely quicker than your buddy’s far more expensive exotic car.
Wow, talk about an offering that changed the world! ChatGPT is the core technology underlying both Apple’s and Microsoft’s AI efforts. Its Dall-E capability for images is impressive (as is Google’s Gemini, which I also use), and it has changed the way many of us write software, create images, and even write complete works.
ChatGPT has been advancing incredibly quickly, and with artificial general intelligence (AGI) on the horizon, it promises to change how we create what we read. ChatGPT and its peers are also starting to change what we watch. It is a truly revolutionary platform, though Google’s Gemini Advanced is no slouch either. While ChatGPT launched before 2024, it became the basis for both Microsoft’s and Apple’s huge AI efforts this year.
Ever since the PC came out, we’ve essentially been asking to go back to terminals. We love the freedom the PC gives us, but we aren’t crazy about the complexity of supporting them ourselves, particularly at scale. Microsoft helped create the problem, and, to be fair, it has mitigated a lot of the related pain by improving Windows.
However, Windows 365 Link is a nearly instant-on, terminal-like device that promises, and mostly delivers, Windows capabilities with the reliability and simplicity of a terminal. It’s kind of a “Back to the Future” product and a bit of a man-bites-dog story. I expect this to be the future of the PC eventually.
Ever since Elon Musk bought Twitter, many of us have been looking for an alternative. Each time we got excited about a new effort, we were disappointed.
Bluesky has been the closest thing to a better Twitter so far, and more and more people I know have migrated to it. Given its distributed architecture, it also has some technical advantages over X/Twitter.
Its moderation, in my opinion, appears to be better than Twitter’s. It was a huge step in the right direction, and I hope it makes it into our future.
The Google Pixel 9 Pro Fold is now the phone I carry, and it is awesome. The updates to the phone have been helpful. It locks if it moves suddenly, as it would if someone snatched it; it also locks if you are reading on a plane during takeoff, but you can just log back in. It folds out into a small tablet, so if I don’t have my glasses handy, I can still make the text large enough to read. Speaking of reading, it is the best e-book alternative since the discontinued Microsoft Surface Duo.
I wish Google had used a Qualcomm Snapdragon instead of its own processor, but the difference isn’t as bad as it was with the first Fold. I’m just loving this phone.
Huawei has demonstrated remarkable ingenuity by navigating technological restrictions to deliver one of the most desirable smartphones in the world, despite some initial growing pains, since this was almost an entirely new platform.
Given that Huawei wasn’t allowed to buy the technology from the U.S., this trifold phone, which expands from iPhone to iPad size, is a technical marvel. It showcases where phones will likely go and set itself apart from competitive devices this year.
Huawei persevered despite the problems posed by the conflicts between the U.S. and China this year. Impressive work.
Lenovo is another company that stood out this year in terms of sheer innovation. The product that most caught my eye was the ThinkBook Auto-Twist AI PC Concept.
What makes this PC different is that it automatically pans the screen and on-board camera based on where you are in the room. This feature is particularly handy for those of us who have to move around during team calls or who need to watch a how-to video while doing something else.
Watching that screen swivel automatically so it remained focused on the user was awesome. As AI advances, I expect we’ll see even more innovation when it comes to AI PCs.
For me, 2024 was defined by AI innovations. The HP Print AI offering, released in September, directly addresses a lot of our aggravation with printers, automatically formatting things for pages without cutting off the borders and positioning the data and graphics so that each print job is perfect.
We’ve all had issues when printing a document and not having it lay out properly. Spreadsheets are the worst, often printing one line or column per page and kicking out hundreds of useless pages that make little or no sense.
HP Print AI auto-formats print jobs so the printed document is useful. The AI analyzes your print job based on past training to determine the optimal format, then auto-configures for that result so each print job is as close to perfect as the AI can make it.
Printer technology needs to be moved into this decade, and this software from HP should do that. Expect to see more advances like this from many companies that plan to use AI to address customer frustrations with their products.
The BDX robots are part of Nvidia’s Project GR00T and use Nvidia’s Jetson robotics technology. You’ll see them at Disney parks, and eventually, you’ll be able to buy one.
This little guy is the closest thing to a real “Star Wars”-like robot I’ve ever found. I wish I’d been smart enough to invest in Lucasfilm back then rather than spending my money on seeing that first “Star Wars” movie over and over and over again. BDX showcased how far robotics has come, and we’ll be seeing a lot of amazing robots you can buy in 2025.
When Nvidia CEO Jensen Huang first presented Blackwell, it was a fantastic event that showcased how a company should release a revolutionary product. This GPU is massive in terms of performance and power requirements, and it is forcing a gigantic pivot from air to water cooling in cloud and enterprise data centers as the world pivots to AI.
The backstory on this part, and on Nvidia’s entire AI effort, is the stuff of legend. Back in the early 2000s, when only IBM was really working with AI, Jensen Huang became convinced that AI was a much more near-term opportunity than most believed. For much of the next 20 years, Nvidia’s financial performance dragged as the company basically bet its future on AI. Had Huang not been a founder, he likely would have been fired.
Then OpenAI asked for help, and Nvidia provided it. Thanks to Nvidia, we now have generative AI that works. Blackwell is the current culmination of this work, a massive GPU that is extremely power-hungry but also incredibly efficient. While it uses a ton of power, it does much more work than any group of other GPUs or NPUs can do with that same power.
AMD’s Threadripper CPU was equally innovative, but it was designed for existing market needs. Nvidia was working on Blackwell before AI became a market, and that’s just never done. By taking what seemed to be an unreasonable risk and executing on it, Nvidia caught its competitors sleeping, and now Nvidia is nearly synonymous with AI.
Back when there were pensions, CEO compensation was more reasonable, and boards more supportive of long-term strategic moves, an effort like this wouldn’t have been that unusual. But in today’s world, you just don’t see that.
Nvidia’s Blackwell effort gave me hope that the U.S. might be able to return to a more strategic future. It caught the imagination of a world increasingly focused on and concerned about AI. Blackwell was a once-in-a-generation leap in performance and a massive bet that could have gone badly, so it is my choice for Product of the Year.
As a lifelong New York Giants fan, it’s been hard to suffer through the 2024 season, culminating last weekend in the Giants’ most recent debacle, losing to the below-average New Orleans Saints on a botched field goal in the last seconds of the game.
In my disgust in the aftermath of the game, it occurred to me: Is present-day Intel the equivalent of the 2024 Giants? It sounds like a ridiculous question, but the similarities are eerie.
Let’s face it: Two titans of their industries — the New York Giants in professional football and Intel in technology — have endured severe scrutiny and poor performance over the past few years. Both were once at the top of their fields, making headlines and defining eras, so it’s easy to draw analogies between them.
For the Giants, beyond the sheer shoddiness of the on-field product over the past few years (the team hasn’t been to the Super Bowl since 2012), management made one of the most idiotic decisions of all time before the season began: extending a questionable long-term contract to “franchise” quarterback Daniel Jones while allowing Saquon Barkley to sign with a divisional rival, the Philadelphia Eagles. Now, Barkley is having one of the greatest seasons of all time for a running back.
As for Intel, the company has struggled to maintain market share in the PC space over the past few years, conceded the smartphone market after passing on Apple’s request for suitable silicon for the iPhone in 2007 (a decision with ecosystem ramifications that Apple has taken advantage of ever since), and missed the industry’s broader movement to Arm-based architectures for mobile devices and even laptops.
Both organizations are currently under fire for their (at least perceived) inability to give fans and customers a modicum of faith that turnarounds are in the making. Although there are similarities between their difficulties, a deeper examination shows that Intel’s problems are fundamentally different from those of the 2024 Giants and are being addressed in a way the team’s are not.
The New York Giants, a legendary NFL franchise with four Super Bowl titles, were under tremendous pressure going into the 2024 campaign.
Recent years have been characterized by inconsistent play, dubious coaching choices, and poor player development. The team has had difficulty adjusting to today’s NFL, where creative play-calling and analytics-driven tactics are paramount.
Despite brief flashes of potential, the Giants have mostly failed to capitalize on their chances, discouraging supporters, leaving experts doubtful of the team’s prospects, and exasperating season ticket holders like me.
Intel used to be the undisputed leader in its industry. The company controlled the semiconductor market for many years, setting the benchmark for chip innovation and performance. But a slew of upheavals in the 2020s put its hegemony in jeopardy. The emergence of rivals like AMD and Nvidia and the advanced manufacturing technology pioneered by Taiwan Semiconductor have compelled Intel to confront its weaknesses.
The leading cause of Intel’s problems is the company’s delay in switching to sophisticated manufacturing nodes. Due to setbacks with its 10nm and 7nm nodes, Intel lost market share in essential categories, while Taiwan Semiconductor and Samsung advanced with their state-of-the-art 5nm and 3nm processes. These challenges were exacerbated by the increasing use of Arm-based architectures, especially in AI and mobile applications, where Intel’s x86 architecture has struggled to stay competitive.
Although the Giants and Intel face formidable obstacles, their responses distinguish them. The Giants have frequently seemed hapless, switching quarterbacks and coaches in an attempt to find a short-term solution. Due to their inability to develop a clear plan of action, fans and experts are beginning to doubt the franchise’s long-term survival.
In contrast, Intel has attempted to take serious action to overcome its obstacles. Under the direction of CEO Pat Gelsinger, the company launched a daring plan to regain its place at the forefront of the semiconductor industry.
The core of this endeavor is Intel’s IDM 2.0 strategy, which aims to increase its role as a foundry for third-party clients while modernizing its manufacturing capabilities. By doing this, Intel hopes to take on Taiwan Semiconductor and Samsung head-to-head as a manufacturing giant and chip designer.
Additionally, Intel has increased its focus on cutting-edge technologies. Its attempts to create specialized chips for data centers and its investments in AI-specific hardware, such as the Gaudi AI accelerators, demonstrate a proactive approach to the upcoming wave of computing innovation. In fairness to Intel, these actions have revealed a business willing to own up to its mistakes while working to influence the future rather than merely responding to it.
An organization’s ability to overcome hardships is largely dependent on its leadership. With numerous coaching staff changes and a front office that frequently appears out of step with the team’s demands, the Giants have had difficulty establishing a permanent leadership structure. This unpredictability has led to a lack of direction and identity on the field. Watch any of the Giants’ losses over the past few seasons, and it’s hard to dispute this.
In contrast, Intel enjoyed reasonable unity and support when Pat Gelsinger rejoined the company. Gelsinger prioritized a return to Intel’s engineering foundation while cultivating an innovative and accountable culture. Ambitious aims and a willingness to take chances characterized his tenure, which contrasts sharply with the Giants’ more cautious strategy.
The Giants and Intel are both burdened by their histories. The Giants’ rich past is a source of both pride and weight, making their recent setbacks even more disappointing. Because of the team’s illustrious background, supporters find it challenging to reconcile its current hardships with its former success.
Being a pioneer in its industry comes with expectations, which Intel also struggles with. The impact of the company’s errors is exacerbated by its standing as a technology innovator. However, Intel’s heritage offers distinct advantages, including a wealth of technical know-how, solid industry ties, and a still enviable reputation, especially with legacy PC OEMs like HP, Dell, and Lenovo. These resources have put Intel in a position to build on its prior achievements and focus on future expansion.
The timelines of their respective sectors represent one of the most considerable distinctions between Intel and the Giants. NFL teams follow an annual cycle, and their fortunes frequently fluctuate depending on how one season turns out. Failures are front-page news, and their immediacy makes them difficult for the Giants to bounce back from in the near term.
Timelines are lengthier in the tech sector, though. Semiconductor development cycles span years, and strategic choices may not show their full effects for a decade.
This longer horizon gives Intel more time to accomplish its ambitions and bounce back from setbacks. While Intel’s problems have been more gradual and, in theory, allow for course correction and progressive development, Wall Street is typically not patient, and investors get nervous when they don’t see positive leading indicators like market share gains and revenue increases.
Despite its struggles, Intel is not a business that is content to let things go. Intel is setting itself up for a long-term resurgence with its IDM 2.0 strategy, AI initiatives, and redoubled emphasis on silicon excellence.
Some now contend that Intel will never regain its position as the semiconductor industry leader, and its issues are so complicated that they might not be resolved. Due to the company’s manufacturing delays, AMD and Nvidia have increased market share, further widening the gap as Intel prepares for the 18A production phase.
Furthermore, Intel’s foundry business has had trouble attracting customers, which has made its recovery attempts more difficult. Pat Gelsinger’s resignation, after a tenure that saw a significant drop in stock value, highlights the need for strong leadership and creative ideas. Restoring investor trust and industry stature will require strategic restructuring and a fresh emphasis on execution, which will be highly challenging given the internal resistance facing whoever takes over as Intel’s leader.
It’s easy to forget that many analysts welcomed Gelsinger’s return to Intel in 2021 with hope because they thought his familiarity with the firm, his grasp of the silicon industry, his focus on customers, and his visionary attributes were precisely what was required to turn the giant around.
However, under his direction, Intel struggled to overcome several obstacles, including a lag in manufacturing improvements and heightened competition from rivals like AMD and Nvidia. These problems caused Intel’s stock value to drop significantly, wiping out almost $150 billion in market capitalization.
Although some have claimed that Gelsinger just needed more time to carry out his plan effectively, the company’s board thought differently and finally decided that a drastic change in direction, starting with a change in CEO, was required.
Considering all that, the comparison between Intel and the 2024 New York Giants, though interesting and even amusing, ultimately falls short.
Even if both organizations are going through difficult times, Intel’s approach shows a degree of strategic vision and flexibility that the Giants have not yet demonstrated. Intel is building the foundation for a future that solidifies its position as a leader in the technology industry, not just battling to remain relevant. If Intel is a behemoth, it is one undergoing reinvention rather than decline, which is precisely what it must do to grow.
There are reasons to be optimistic for Intel. Its Lunar Lake family of processors is showing favorable performance and battery-life comparisons to Apple Silicon and even to offerings from Qualcomm, which has generated a great deal of favorable news with its Snapdragon X Elite solutions for laptops.
Intel’s incoming CEO, whoever that might be, will face one of the greatest corporate turnaround challenges in tech history. The company will have to cut headcount so dramatically that its elimination of 15,000 positions earlier this year will look like a pinprick.
Intel seems committed to its foundry strategy, which will require years of investment before it yields significant returns. In a post-Biden Administration world, the company may be unable to rely on the federal government for further investment in its foundry business. To top all of that, some customers may not be comfortable with Intel’s “church and state” strategy of manufacturing non-Intel chips in Intel factories.
Intel’s chances for success will largely depend on its new leader. I advise hiring an outsider, someone who won’t be swayed by legacy Intel personnel who have developed a survival mentality and are reluctant to take risks. Intel’s new CEO will likely be the most-watched tech hire of 2025, as their leadership will provide critical insights into the company’s future.
The new CEO will also have to contend with a management team that has survived the company’s many rounds of cuts and may be unwilling to make the changes Intel must undertake.
As for the Giants, I’m horrified to admit that I’m not optimistic. For the first time in my 46 years as a season ticket holder (shelling out over $200,000 during that period), I’m contemplating giving them up. Or maybe I’ll just play Madden NFL 25 on my Xbox One for the remainder of the season and not waste my time watching Big Blue suffer.
Fortunately for Intel, it is not at that point. The company controls its destiny, but time is not on its side, so its incoming CEO must show results quickly and tangibly.
Some software developers disagree with the open-source community on licensing and compliance issues, arguing that the community needs to redefine what constitutes free open-source code.
The term “open washing” has emerged, referring to what some industry experts claim is the practice of AI companies misusing the “open source” label. As the artificial intelligence rush intensifies, efforts to redefine terms for AI processes have only added to the confusion.
Recent accusations that Meta “open washed” the description of its Llama AI model as true open source fueled the latest volley in the technical confrontation. Some in the industry, like Ann Schlemmer, CEO of open-source database firm Percona, have suggested that open-source licensing be replaced with a “fair source” designation.
Schlemmer, a strong advocate for adherence to open-source principles, expressed concern over the potential misuse of open-source terminology. She wants clear definitions and guardrails for AI’s inclusion in open source that align with understanding the core principles of open-source software.
“What does open-source software mean when it comes to AI models? [It refers to] the code is available, here’s the licensing, and here’s what you can do with it. Then we are piling on AI,” she told LinuxInsider.
The use of AI data is being mixed in as if it were software, which is where the confusion within the industry originates.
“Well, the data is not the software. Data is data. There are already privacy laws to regulate that use,” she added.
The Open Source Initiative (OSI) released an updated definition for open-source AI systems on Oct. 28, encouraging organizations to do more instead of slapping the “open source” term on AI work. OSI is a California-based public benefit corporation that promotes open source worldwide.
In a published interview elsewhere, OSI’s Executive Director Stefano Maffulli said that Meta’s labeling of the Llama foundation model as open source confuses users and pollutes the open-source concept. This action occurs as governments and agencies, including the European Union, increasingly support open-source software.
In response, OSI issued the first version of its Open Source AI Definition (OSAID) to define more explicitly what qualifies as open-source AI. The document follows a year-long global community design process and offers a standard for community-led, open, and public evaluations to validate whether an AI system can be deemed open-source AI.
“The co-design process that led to version 1.0 of the Open Source AI Definition was well-developed, thorough, inclusive, and fair,” said Carlo Piana, OSI board chair, in the press release.
The new definition requires open-source models to provide enough information to enable a skilled person to recreate a substantially equivalent system using the same or similar training data, noted Ayah Bdeir, lead for AI strategy at Mozilla, in the OSI announcement.
“[It] goes further than what many proprietary or ostensibly open source models do today,” she said. “This is the starting point to addressing the complexities of how AI training data should be treated, acknowledging the challenges of sharing full datasets while working to make open datasets a more commonplace part of the AI ecosystem.”
The text of the OSAID v.1.0 and a partial list of the global stakeholders endorsing the definition are available on the OSI website.
Schlemmer, who did not participate in writing OSI’s open-source definition, said she and others have concerns about its content. OSAID does not resolve all the issues, she contended, and some of its content needs to be walked back.
“Clearly, this is not said and done right, even by their own admission. The reception has been overwhelming, but not in the positive sense,” Schlemmer added.
She compared the growing practice of loosely referring to something as an open-source product to what occurs in other industries. For example, the food industry uses the words “organic” or “natural” to suggest an assumption of a product’s contents or benefit to consumers.
“How much [of labeling a software product open source] is a marketing ploy?” she questioned.
Open-source supporters often boast about how widely the technology is deployed globally. Only rarely do they mention problems with license enforcement.
Schlemmer admitted that economic pressures drive changes in open-source licenses. It often becomes a balancing act between sharing free open-source code and monetizing software development.
For example, companies like MongoDB, her own Percona, and Elastic have adapted their licensing strategies to balance commercial interests with open-source principles. In these cases, license violations or enforcement were not involved.
“Several tools exist in the ecosystem, and compliance groups in corporate departments help people be compliant. Particularly in the larger organizations, there are frameworks,” said Schlemmer.
Individual developers may not recognize all those nuances. However, many license changes are based on determining the economic value of the project’s original owner.
Schlemmer is optimistic about the future of open source. Developers can build upon open-source code without violating licenses. However, changes in licensing can limit their ability to monetize.
These concerns highlight the potential erosion of open-source adoption due to license changes and the need for ongoing vigilance. She cautioned that it will take continuous evolution of open-source licensing and adaptation to new technologies and market pressures to resolve lingering issues.
“We must keep going back to the core tenet of open-source software and be very clear as to what that means and doesn’t mean,” Schlemmer recommended. “What problem are we trying to solve as technology evolves?”
Some of those challenges have already been addressed, she added. We have a framework for the open-source definition with clear labels and licenses.
“So, what’s this new concept? Why does what we already have no longer apply when we reference back?”
That is what needs to be aligned.
Google announced a major step forward in the development of a commercial quantum computer on Tuesday, releasing test results for its Willow quantum chip.
Those results show that the more qubits Google used in Willow, the more it reduced errors and the more quantum the system became.
“Google’s achievement in quantum error correction is a significant milestone toward practical quantum computing,” said Florian Neukart, chief product officer at Terra Quantum, a developer of quantum algorithms, computing solutions, and security applications, in Saint Gallen, Switzerland.
“It addresses one of the largest hurdles — maintaining coherence and reducing errors during computation,” he told TechNewsWorld.
Qubits, the basic information unit in quantum computing, are extremely sensitive to their environment. Any disturbances around them can cause them to lose their quantum properties, which is called decoherence. Maintaining qubit stability — or coherence — long enough to perform useful computations has been a significant challenge for developers.
Decoherence also makes quantum computers error-prone, which is why Google’s announcement is so important. Effective error correction is essential to the development of a practical quantum computer.
“Willow marks an important milestone on the journey toward fault-tolerant quantum computing,” said Rebecca Krauthamer, CEO of QuSecure, a maker of quantum-safe security solutions in San Mateo, Calif.
“It’s a step closer to making quantum systems commercially viable,” she told TechNewsWorld.
In a company blog, Google Vice President of Engineering Hartmut Neven explained that researchers tested ever-larger arrays of physical qubits, scaling up from a grid of 3×3 encoded qubits, to a grid of 5×5, to a grid of 7×7. With each advance, they cut the error rate in half. “In other words, we achieved an exponential reduction in the error rate,” he wrote.
“This historic accomplishment is known in the field as ‘below threshold’ — being able to drive errors down while scaling up the number of qubits,” he continued.
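To make that scaling claim concrete, here is a minimal sketch of the relationship Neven describes, assuming a flat one-half suppression factor per grid-size step; the starting error rate is a hypothetical placeholder, not a figure from Google’s results.

```python
# Illustrative model of "below threshold" scaling: per the results above,
# each step up in grid size (3x3 -> 5x5 -> 7x7) cut the logical error
# rate roughly in half.
base_error_rate = 1e-2    # hypothetical rate for the 3x3 grid (assumed)
suppression_factor = 0.5  # reported halving per grid-size step

for step, grid in enumerate(["3x3", "5x5", "7x7"]):
    rate = base_error_rate * suppression_factor ** step
    print(f"{grid} grid: logical error rate ~ {rate:.4f}")
```

The key point is the trend, not the absolute numbers: below threshold, adding qubits drives the logical error rate down exponentially rather than up.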
“The machines are very sensitive, and noise builds up both from any external influence as well as from use itself,” said Simon Fried, vice president for business development and marketing at Classiq, a developer of software for quantum computers, in Tel Aviv, Israel.
“Being able to minimize noise or compensate for it makes it possible to run longer, more complex programs,” he told TechNewsWorld.
“This is significant progress in terms of chip tech because of the inherent stability of the hardware as well as its ability to control noise,” he added.
Neven also noted that as the first system below the threshold, this is the most convincing prototype for a scalable logical qubit built to date. “It’s a strong sign that useful, very large quantum computers can indeed be built,” he wrote. “Willow brings us closer to running practical, commercially relevant algorithms that can’t be replicated on conventional computers.”
Google also released data on Willow’s performance based on a common quantum computer test known as the random circuit sampling (RCS) benchmark. “[I]t checks whether a quantum computer is doing something that couldn’t be done on a classical computer,” Neven explained. “Any team building a quantum computer should check first if it can beat classical computers on RCS; otherwise, there is strong reason for skepticism that it can tackle more complex quantum tasks.”
Neven called Willow’s performance on the RCS benchmark “astonishing.” It performed a computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion years — that’s a 1 followed by 25 zeros.
“This mind-boggling number exceeds known timescales in physics and vastly exceeds the age of the universe,” he wrote. “It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse.”
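For scale, a quick back-of-the-envelope check of that claim, using the commonly cited age of the universe of roughly 13.8 billion years (an outside estimate, not a figure from Google’s post):

```python
# Compare the claimed classical runtime for the RCS task to the age of
# the universe (a commonly cited estimate, assumed here).
rcs_classical_years = 1e25       # 10 septillion years, per Google's claim
age_of_universe_years = 1.38e10  # ~13.8 billion years

print(f"~{rcs_classical_years / age_of_universe_years:.1e}x "
      "the age of the universe")  # prints: ~7.2e+14x the age of the universe
```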
Chris Hickman, chief security officer at Keyfactor, a digital identity management company in Cleveland, hailed Willow as “a significant milestone in quantum computing.” He cautioned, however, that Willow’s advanced error correction brings the field closer to practical quantum applications, signaling that businesses need to prioritize preparing for the inevitable disruption of quantum computing in areas like encryption and security.
“While this development doesn’t immediately alter the expected timeline for quantum computers to break current encryption standards, it reinforces the idea that progress towards this milestone is accelerating,” he told TechNewsWorld.
“Practical use cases for quantum computers go beyond applications that stand to benefit businesses,” he said. “Bad actors will inevitably leverage the technology for their own nefarious benefit.”
“Hackers will leverage quantum computers to decrypt sensitive information, rendering legacy cryptographic methods obsolete,” he continued. “These include algorithms like RSA and ECC, which are currently considered unbreakable.”
Karl Holmqvist, founder and CEO of Lastwall, a provider of identity-centric and quantum-resilient technologies, in Mountain View, Calif., agreed that the rate of development of cryptographically relevant quantum computers is accelerating. “But I also understand that there are skeptics who think development is not as close as it seems or that it may never arrive,” he told TechNewsWorld.
“So, my question to everyone is this: Given that we will either deploy quantum-resilient solutions too early or too late, which scenario carries more risk?” he asked. “Would you rather understand the implications of post-quantum cryptographic deployments, test them in your environments, and be prepared to rapidly deploy when needed — or risk losing your secrets?”
In his blog, Neven also revealed why he changed his focus from artificial intelligence to quantum computing. “My answer is that both will prove to be the most transformational technologies of our time, but advanced AI will significantly benefit from access to quantum computing,” he wrote.
Quantum computing is inherently designed to tackle complex problems, so it could be very helpful with the development of AI, noted Edward Tian, CEO of GPTZero, maker of an AI detection platform in Arlington, Va. “However, we have seen instances of classical AI still being the better method,” he told TechNewsWorld.
“I came out of AI and entered the quantum computing world specifically because of the promise quantum computing has to unlock doors that remain shut in a classical computing world,” added QuSecure’s Krauthamer.
However, she had a word of caution about the technology. “A quantum computer is not simply a bigger, faster, more powerful computer,” she said. “It thinks in a fundamentally different way and, therefore, will solve different types of problems than we can today. It is wise to be skeptical if quantum computing is presented as a cure-all for challenging computation tasks.”
Security researchers on Tuesday revealed a sophisticated mobile phishing campaign that targets job seekers and aims to install dangerous malware on their phones.
The campaign discovered by Zimperium zLabs targets Android mobile phones and aims to distribute a variant of the Antidot banking trojan that the researchers have dubbed AppLite Banker.
“The AppLite banking trojan’s ability to steal credentials from critical applications like banking and cryptocurrency makes this scam highly dangerous,” said Jason Soroko, a senior fellow at Sectigo, a certificate lifecycle management provider in Scottsdale, Ariz.
“As mobile phishing continues to rise, it’s crucial for individuals to remain vigilant about unsolicited job offers and always verify the legitimacy of links before clicking,” he told TechNewsWorld.
“The AppLite banking trojan requires permissions through the phone’s accessibility features,” added James McQuiggan, a security awareness advocate at KnowBe4, a security awareness training provider in Clearwater, Fla.
“If the user is unaware,” he told TechNewsWorld, “they can allow full control over their device, making personal data, GPS location, and other information available for the cybercriminals.”
In a blog on Zimperium’s website, researcher Vishnu Pratapagiri explained that attackers present themselves as recruiters, luring unsuspecting victims with job offers. As part of their fraudulent hiring process, he continued, the phishing campaign tricks victims into downloading a malicious application that acts as a dropper, eventually installing AppLite.
“The attackers behind this phishing campaign demonstrated a remarkable level of adaptability, leveraging diverse and sophisticated social engineering strategies to target their victims,” Pratapagiri wrote.
A key tactic employed by the attackers involves masquerading as job recruiters or HR representatives from well-known organizations, he continued. Victims are enticed to respond to fraudulent emails carefully crafted to resemble authentic job offers or requests for additional information.
“People are desperate to get a job, so when they see remote work, good pay, good benefits, they text back,” noted Steve Levy, principal talent advisor with DHI Group, parent company of Dice, a career marketplace for candidates seeking technology-focused roles and employers looking to hire tech talent globally, in Centennial, Colo.
“That starts the snowball rolling,” he told TechNewsWorld. “It’s called pig butchering. Farmers will fatten a pig little by little, so when it’s time to cook it, they’re really big and juicy.”
Pratapagiri explained that, after the initial communication, the threat actors direct victims to download a purported CRM Android application. While appearing legitimate, this application functions as a malicious dropper, facilitating the deployment of the primary payload onto the victim’s device.
Stephen Kowski, field CTO at SlashNext, a computer and network security company in Pleasanton, Calif., noted that the AppLite campaign represents a sophisticated evolution of techniques first seen in Operation Dream Job, a global campaign run in 2023 by the infamous North Korean Lazarus group.
While the original Operation Dream Job used LinkedIn messages and malicious attachments to target job seekers in the defense and aerospace sectors, today’s attacks have expanded to exploit mobile vulnerabilities through fraudulent job application pages and banking trojans, he explained.
“The dramatic shift to mobile-first attacks is evidenced by the fact that 82% of phishing sites now specifically target mobile devices, with 76% using HTTPS to appear legitimate,” he told TechNewsWorld.
“The threat actors have refined their social engineering tactics, moving beyond simple document-based malware to deploy sophisticated mobile banking trojans that can steal credentials and compromise personal data, demonstrating how these campaigns continue to evolve and adapt to exploit new attack surfaces,” Kowski explained.
“Our internal data shows that users are four times more likely to click on malicious emails when using mobile devices compared to desktops,” added Mika Aalto, co-founder and CEO of Hoxhunt, a provider of enterprise security awareness solutions in Helsinki.
“What’s even more concerning is that mobile users tend to click on these malicious emails at an even larger rate during the late night hours or very early in the morning, which suggests that people are more vulnerable to attacks on mobile when their defenses are down,” he told TechNewsWorld. “Attackers are clearly aware of this and are continually evolving their tactics to exploit these vulnerabilities.”
This new wave of cyber scams underscores the evolving tactics used by cybercriminals to exploit job seekers who are motivated to make a prospective employer happy, observed Soroko.
“By capitalizing on individuals’ trust in legitimate-looking job offers, attackers can infect mobile devices with sophisticated malware that targets financial data,” he said. “The use of Android devices, in particular, highlights the growing trend of mobile-specific phishing campaigns.”
“Be careful what you sideload on an Android device,” he cautioned.
DHI’s Levy noted that attacks on job seekers aren’t limited to mobile phones. “I don’t think this is simply relegated to mobile phones,” he said. “We’re seeing this on all the social platforms. We’re seeing this on LinkedIn, Facebook, TikTok, and Instagram.”
“Not only are these scams common, they’re very insidious,” he declared. “They prey on the emotional situation of job seekers.”
“I probably get three to four of these text inquiries a week,” he continued. “They all go into my junk folder automatically. These are the new versions of the Nigerian prince emails that ask you to send them $1,000, and they’ll give you $10 million back.”
Beyond its ability to mimic enterprise companies, AppLite can also masquerade as Chrome and TikTok apps, demonstrating a wide range of target vectors, including full device takeover and application access.
“The level of access provided [to] the attackers could also include corporate credentials, application, and data if the device was used by the user for remote work or access for their existing employer,” Pratapagiri wrote.
“As mobile devices have become essential to business operations, securing them is crucial, especially to protect against the large variety of different types of phishing attacks, including these sophisticated mobile-targeted phishing attempts,” said Patrick Tiquet, vice president for security and architecture of Keeper Security, a password management and online storage company, in Chicago.
“Organizations should implement robust mobile device management policies, ensuring that both corporate-issued and BYOD devices comply with security standards,” he told TechNewsWorld. “Regular updates to both devices and security software will ensure that vulnerabilities are promptly patched, safeguarding against known threats that target mobile users.”
Aalto also recommended the adoption of human risk management (HRM) platforms to tackle the growing sophistication of mobile phishing attacks.
“When a new attack is reported by an employee, the HRM platform learns to automatically find future similar attacks,” he said. “By integrating HRM, organizations can create a more resilient security culture where users become active defenders against mobile phishing and smishing attacks.”
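Hoxhunt’s internals aren’t public, but the core idea of learning from one reported attack to catch similar future ones can be illustrated with a simple text-similarity sketch. Everything below (the sample messages, the vectorizer choice, the threshold) is a hypothetical stand-in for what a production HRM platform would do with far more sophistication.

```python
# Minimal sketch: flag incoming emails similar to ones employees reported.
# Sample data and threshold are hypothetical; real HRM platforms are far richer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reported = [
    "Your package could not be delivered. Confirm your details here.",
    "Congratulations! You have been shortlisted for a remote position.",
]
incoming = [
    "You were shortlisted for a remote job opening. Reply to confirm.",
    "Quarterly all-hands meeting moved to Thursday at 10 a.m.",
]

vectorizer = TfidfVectorizer().fit(reported + incoming)
scores = cosine_similarity(
    vectorizer.transform(incoming), vectorizer.transform(reported)
)

THRESHOLD = 0.35  # hypothetical cutoff
for email, row in zip(incoming, scores):
    if row.max() >= THRESHOLD:
        print(f"FLAG ({row.max():.2f}): {email}")
```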
Well, it’s that time of year again. Like you, I’m working through the gift-giving part of the holidays. I used to send out Amazon gift cards until one was stolen. I didn’t get sympathy or a refund from Amazon, so I’ve decided to steer clear of store gift cards altogether. So, I’m back to sending people things I think they might want, and this week, I’ll share my list of tech gadgets that make great gifts.
I’ll close with my last Product of the Week for 2024, a new off-road electric motorcycle from Dust Moto due out next year that is being built almost in my backyard.
Do you have a special lady in your life? Wife, sister, mother, daughter, or girlfriend? Well, the best, but far from the cheapest, hair dryer I’ve found is the Dyson Supersonic. It’s normally priced around $430, but I found it on sale for $329. I got one for my wife, and she loved it, so I got one for my sister this year, but don’t tell her!
This hair dryer is relatively small, which makes it easy to pack. It comes up to speed quickly and holds up well. We haven’t had any problems with ours so far.
The Supersonic uses Dyson’s unique design, which pulls the air from low in the handle, so it is unlikely to suck up your hair, and it has adjustments for every hair type, making it extremely flexible. It does come with a host of attachments, as is typical for anything Dyson-made. Don’t ask me to explain all the features because I don’t use them myself.
This is my gift idea for someone you care about because it is well-made and expensive. It’s unlikely they’d buy it for themselves, and they will likely think fondly of you when they dry their hair.
For the older folks in your life — parents, grandparents, uncles, or aunts — who might not be great with technology but who love family pictures, consider a digital picture frame you can preload with photos. With a web-connected frame like the Nixplay digital touchscreen picture frame, you can update the picture playlist from the comfort of your couch once the device is set up.
Yes, you’ll have to make sure the recipient’s Wi-Fi is working, which is a good reason to visit rather than just sending the thing out. At 10.1 inches and for around $140 (the 15-inch is on sale for $244 right now), it won’t be too big for a bedside table, and it works with both iOS and Android phones.
Although they say a grandma could upload pictures to the frame, it’s better if you manage it yourself. Regularly updating the pictures turns the frame into an ongoing surprise and a thoughtful reminder that you’re thinking of them. Parents often struggle to feel connected after their kids move out, especially when those kids have their own families. This gift can help bridge that gap, serving as a simple but meaningful way to show you care.
I was in a terrible car accident last year where an airbag knocked me out and broke my back. Because I was hit so hard, I have little memory of the accident, but my 70mai Dash Cam Omni caught the entire event, so I was able to go back and learn from it.
What makes this dash cam different is its swiveling head, like R2-D2’s, so you don’t need a separate rear-view camera. It will follow the action as it moves from side to side, capturing the entire event. It will also record people who mess with your car while it is parked. While it isn’t as good as what Teslas typically ship with, for those of us who don’t drive a Tesla, this is a great way to record your drive.
It can also get you out of bogus tickets. I got one of those a few years back where the officer said I was doing 90 on an open road. I wasn’t, and I was in heavy traffic. It was my word against his, and he won. If I’d had the camera back then, it would have paid for itself with that one ticket.
This dash cam can also reinforce the need to drive more safely, given the record you are creating can be used against you. If you have kids who borrow your car, you might want to consider this to keep them from doing stupid things. At $200, it’s one of the more reasonable dash cams out there.
Oh, and there are a lot of insurance scammers out there who will back into your car and scream that you rear-ended them or throw themselves in front of your vehicle, claiming you didn’t yield. This camera can help you get out of those situations as well.
Living in Bend, Oregon, where winters can be harsh, I’ve come to appreciate heated gear. The VolteX Heated Scarf is one of the easiest-to-use self-heated items on the market.
At under $17, this is a handy gift for those on a budget. You have to charge it up, but it heats almost instantly once turned on and has three heat settings (just like my heated car seats).
The scarf is fluffy, so it feels really good on your neck. I bought one for my wife this year because she hates the cold. She can leave it plugged into the car’s power outlet, so it’s always ready when a little instant warmth is appreciated.
It is very portable, so you can travel with it, though since it has a battery, you’ll need to carry it on board rather than pack it in checked luggage. It comes in brown, black, gray, or white. I ended up getting two in different colors since I’m going to want to wear one as well.
Things are getting a bit crazy, given all of the political tension in the air. I’ve been in several situations where I wanted to record what was happening without experiencing the anger that usually comes when you use your smartphone to capture someone breaking the law or behaving badly. In addition, using an action camera to record videos often doesn’t provide the same view you have because the camera is located somewhere other than your face.
Ray-Ban Meta smart glasses have a great camera in them. They can also replace your earbuds if you like to walk and listen to music or audiobooks or have your messages read to you while you are doing something else like running. The glasses come in a variety of styles and will take prescription lenses.
The camera is well hidden, so you don’t have the concerns that the old Google Glass headset had. While the base prices currently range from $299 to $379, these glasses could be just what your loved one needs to capture that fleeting moment when something truly good or terribly bad is happening and you want to create a permanent record.
I hope this list helps you with your holiday shopping choices. Putting it together helped me make some of my own this year. I hope you have a marvelous holiday season!
While I don’t ride much anymore, I used to own a Suzuki 125 and a Yamaha 175. The 125 was my school motorcycle, and the 175 was offroad. The problem with gas bikes when you are riding in the wilderness is that they make a lot of noise, and they need RPMs for torque. This means you miss a lot of the natural sounds, and when you get into trouble, like on a steep hill, you can end up spinning your wheel while trying to build enough torque to climb it.
At an estimated $10,950 and with a release date of late 2025, Dust Moto’s Hightail all-electric dirt bike is simply awesome. While not street legal, at least not yet, it is a monster of an alternative when it comes to riding off-road. It has an extremely clean design, and it was created by guys who also enjoy riding outdoors — we have a lot of that here in Oregon. Their experience shows in the design and execution of this bike.
With 42 HP and 660 Nm of torque, it will run for up to two hours on a charge. Plus, it has a replaceable battery, so you could carry a spare. It was created with support from Bloom, a company specializing in electric motorcycles. So, even though this is Dust Moto’s first bike, it isn’t Bloom’s first, and you can be confident this bike will do all it claims and more.
Designed within walking distance of my home, this would be on my Christmas shortlist for 2025 if I were still riding. So the Hightail all-electric dirt bike by Dust Moto is my last Product of the Week for 2024, even though it won’t make my Christmas list until 2025, when the bike becomes available.
AI-driven systems have become prime targets for sophisticated cyberattacks, exposing critical vulnerabilities across industries. As organizations increasingly embed AI and machine learning (ML) into their operations, the stakes for securing these systems have never been higher. From data poisoning to adversarial attacks that can mislead AI decision-making, the challenge extends across the entire AI/ML lifecycle.
In response to these threats, a new discipline, machine learning security operations (MLSecOps), has emerged to provide a foundation for robust AI security. Let’s explore five foundational categories within MLSecOps.
AI systems rely on a vast ecosystem of commercial and open-source tools, data, and ML components, often sourced from multiple vendors and developers. If not properly secured, each element within the AI software supply chain, whether it’s datasets, pre-trained models, or development tools, can be exploited by malicious actors.
The SolarWinds hack, which compromised multiple government and corporate networks, is a well-known example. Attackers infiltrated the software supply chain, embedding malicious code into widely used IT management software. Similarly, in the AI/ML context, an attacker could inject corrupted data or tampered components into the supply chain, potentially compromising the entire model or system.
To mitigate these risks, MLSecOps emphasizes thorough vetting and continuous monitoring of the AI supply chain. This approach includes verifying the origin and integrity of ML assets, especially third-party components, and implementing security controls at every phase of the AI lifecycle to ensure no vulnerabilities are introduced into the environment.
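One concrete piece of that vetting is integrity pinning: recording a cryptographic hash for every third-party model or dataset when it is approved, then re-checking the hash before each use. Here is a minimal sketch of that check; the file names and manifest entries are hypothetical placeholders.

```python
# Minimal sketch: verify ML artifacts against a pinned-hash manifest before use.
# File names and digests below are hypothetical placeholders (real entries would
# be full 64-character SHA-256 hexdigests recorded at approval time).
import hashlib
from pathlib import Path

MANIFEST = {
    "models/sentiment.onnx": "9f2c...e41a",
    "data/train.parquet": "77b0...c9d3",
}

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

for name, expected in MANIFEST.items():
    actual = sha256(Path(name))
    status = "OK" if actual == expected else "TAMPERED?"
    print(f"{status}: {name}")
```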
In the world of AI/ML, models are often shared and reused across different teams and organizations, making model provenance — how an ML model was developed, the data it used, and how it evolved — a key concern. Understanding model provenance helps track changes to the model, identify potential security risks, monitor access, and ensure that the model performs as expected.
Open-source models from platforms like Hugging Face or Model Garden are widely used due to their accessibility and collaborative benefits. However, open-source models also introduce risks, as they may contain vulnerabilities that bad actors can exploit once they are introduced to a user’s ML environment.
MLSecOps best practices call for maintaining a detailed history of each model’s origin and lineage, including an AI Bill of Materials, or AI-BOM, to safeguard against these risks.
By implementing tools and practices for tracking model provenance, organizations can better understand their models’ integrity and performance and guard against malicious manipulation or unauthorized changes, including but not limited to insider threats.
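There is no single mandated AI-BOM schema, but the kinds of provenance fields involved are easy to picture. The sketch below is purely illustrative, with hypothetical field choices and values, not a standard format.

```python
# Illustrative AI-BOM entry; the fields are a plausible sketch, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class AIBOMEntry:
    model_name: str
    version: str
    source: str                  # e.g., a model hub repo URL
    base_model: str | None       # lineage: what this model was fine-tuned from
    training_data: list[str]     # datasets and their origins
    license: str
    sha256: str                  # pinned artifact hash
    known_vulnerabilities: list[str] = field(default_factory=list)

entry = AIBOMEntry(
    model_name="support-chat-classifier",                # hypothetical
    version="1.3.0",
    source="https://huggingface.co/example/org-model",   # placeholder URL
    base_model="distilbert-base-uncased",
    training_data=["internal-tickets-2024 (in-house)"],
    license="Apache-2.0",
    sha256="...",                                        # full digest in practice
)
```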
Strong GRC measures are essential for ensuring responsible and ethical AI development and use. GRC frameworks provide oversight and accountability, guiding the development of fair, transparent, and accountable AI-powered technologies.
The AI-BOM is a key artifact for GRC. It is essentially a comprehensive inventory of an AI system’s components, including ML pipeline details, model and data dependencies, license risks, training data and its origins, and known or unknown vulnerabilities. This level of insight is crucial because one cannot secure what one does not know exists.
An AI-BOM provides the visibility needed to safeguard AI systems from supply chain vulnerabilities, model exploitation, and more. This MLSecOps-supported approach offers several key advantages, like enhanced visibility, proactive risk mitigation, regulatory compliance, and improved security operations.
In addition to maintaining transparency through AI-BOMs, MLSecOps best practices should include regular audits to evaluate the fairness and bias of models used in high-risk decision-making systems. This proactive approach helps organizations comply with evolving regulatory requirements and build public trust in their AI technologies.
AI’s growing influence on decision-making processes makes trustworthiness a key consideration in the development of machine learning systems. In the context of MLSecOps, trusted AI represents a critical category focused on ensuring the integrity, security, and ethical considerations of AI/ML throughout its lifecycle.
Trusted AI emphasizes the importance of transparency and explainability in AI/ML, aiming to create systems that are understandable to users and stakeholders. By prioritizing fairness and striving to mitigate bias, trusted AI complements broader practices within the MLSecOps framework.
The concept of trusted AI also supports the MLSecOps framework by advocating for continuous monitoring of AI systems. Ongoing assessments are necessary to maintain fairness, accuracy, and vigilance against security threats, ensuring that models remain resilient. Together, these priorities foster a trustworthy, equitable, and secure AI environment.
Within the MLSecOps framework, adversarial machine learning (AdvML) is a crucial category for those building ML models. It focuses on identifying and mitigating risks associated with adversarial attacks.
These attacks manipulate input data to deceive models, potentially leading to incorrect predictions or unexpected behavior that can compromise the effectiveness of AI applications. For example, subtle changes to an image fed into a facial recognition system could cause the model to misidentify the individual.
By incorporating AdvML strategies during the development process, builders can enhance their security measures to protect against these vulnerabilities, ensuring their models remain resilient and accurate under various conditions.
AdvML emphasizes the need for continuous monitoring and evaluation of AI systems throughout their lifecycle. Developers should implement regular assessments, including adversarial training and stress testing, to identify potential weaknesses in their models before they can be exploited.
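A textbook example of this attack-and-defend loop is the fast gradient sign method (FGSM), in which the same gradient-based perturbation used to fool a model is folded back into training. The sketch below assumes an already-trained PyTorch classifier and a labeled input batch; it illustrates the general technique, not any particular vendor’s implementation.

```python
# Minimal FGSM sketch (a textbook adversarial-ML example, not a vendor's method).
# Assumes `model` is a trained PyTorch classifier and (x, y) a labeled batch
# of inputs scaled to [0, 1].
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each input element in the direction that increases the loss most.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Adversarial training folds these perturbed inputs back into the loss, e.g.:
#   loss = F.cross_entropy(model(x), y) \
#        + F.cross_entropy(model(fgsm_perturb(model, x, y)), y)
```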
By prioritizing AdvML practices, ML practitioners can proactively safeguard their technologies and reduce the risk of operational failures.
AdvML, alongside the other categories, demonstrates the critical role of MLSecOps in addressing AI security challenges. Together, these five categories highlight the importance of leveraging MLSecOps as a comprehensive framework to protect AI/ML systems against emerging and existing threats. By embedding security into every phase of the AI/ML lifecycle, organizations can ensure that their models are high-performing, secure, and resilient.
As a longtime user of the original Sonos Arc, I approached the new Sonos Arc Ultra with excitement and skepticism.
The original Arc has been a staple in my home entertainment setup. It delivers impressive Dolby Atmos sound and effortlessly integrates with the Sonos ecosystem.
With the Arc Ultra promising upgrades in sound quality, design, and connectivity, I was eager to see if it could live up to the hype and justify its higher price tag.
After spending time with the Ultra, it’s clear that Sonos hasn’t just refined its flagship soundbar; it has reimagined what a standalone audio system can offer. But is it enough to tempt existing Arc users like me to take the leap? Let’s dive in.
The Sonos Arc Ultra, released on Oct. 29, is Sonos’ latest flagship soundbar. It is priced at $999 and available in black or white.
This new release marks a slight price increase over its predecessor, the original Arc, which is discontinued and now being sold at discounted rates as retailers clear remaining stock.
The Arc Ultra enters a competitive market, facing rivals like the Sony Bravia Theatre Bar 9 and the Samsung HW-Q990D. Both offer compelling features and, at times, significant discounts.
Visually, the Arc Ultra closely resembles the original Arc, maintaining Sonos’ minimalist aesthetic with a perforated grille encompassing most of the chassis. However, subtle changes include a ledge at the back housing touch controls — play/pause, skip, volume slider, and a voice control button — relocated from the main grille.
The soundbar’s dimensions have been adjusted ever so slightly: it’s wider at 118 cm (up from 114 cm) but shorter in height at 7.5 cm (down from 8.7 cm), reducing the likelihood of obstructing the TV screen when placed in front. Weighing approximately 350 g less than its predecessor, the Arc Ultra is also more wall-mount friendly.
The design requires an open placement, as positioning it in a nook or under a shelf can impede the upward-firing drivers essential for optimal sound dispersion.
The Arc Ultra boasts a 9.1.4-channel configuration, a significant upgrade from the original Arc’s 5.0.2 setup. It incorporates 14 custom-engineered drivers powered by 15 Class D amplifiers, including seven tweeters, six midrange woofers, and a novel Sound Motion woofer.
This innovative woofer utilizes four smaller, lightweight motors to move the cone, enabling greater air displacement and, according to Sonos, delivering up to twice the bass of the original Arc. The dual-cone design also aims to minimize mechanical vibrations, contributing to a more balanced sound profile.
Despite these advancements, the Arc Ultra lacks support for DTS audio formats, focusing solely on Dolby Atmos for spatial audio. Connectivity options remain limited, with a single HDMI eARC port and no dedicated HDMI inputs, necessitating that all external sources be connected through the TV. This setup may pose challenges for users with multiple high-spec gaming devices and limited HDMI 2.1 ports on their TVs.
On the upside, the Arc Ultra introduces Bluetooth connectivity (a first for Sonos soundbars) and expands Sonos’ excellent Trueplay calibration support to Android devices, enhancing user accessibility.
In terms of audio performance, the Arc Ultra delivers a clean, precise, and spacious soundstage with impressive three-dimensionality. The enhanced bass is deep and expressive, providing a solid foundation without overwhelming the overall sound profile.
Dialogue clarity has improved, thanks to the new front-firing speaker array dedicated to the center channel, ensuring crisp and intelligible speech reproduction. The soundbar excels in detail retrieval, capturing subtle nuances across various content types.
However, the absence of HDMI passthrough and DTS support may be limiting for some users. Additionally, while the Sonos app offers robust control and customization options, some users have reported occasional issues that could affect the overall user experience.
Compared to competitors like the Sony Bravia Theatre Bar 9 and the Samsung HW-Q990D, the Arc Ultra holds its ground in terms of sound quality and design.
Though officially priced higher, the Sony Bravia Theatre Bar 9 often sees discounts that bring it closer to the Arc Ultra’s price point. The Bravia Theatre Bar 9 boasts a comprehensive feature set, including HDMI passthrough and support for both Dolby Atmos and DTS:X formats, offering greater flexibility for users with diverse content sources.
While more expensive, the Samsung HW-Q990D includes a wireless subwoofer and surround speakers, delivering a more immersive surround sound experience out of the box. Its connectivity options are more extensive, featuring multiple HDMI inputs and support for various audio formats, making it a versatile choice for users seeking a comprehensive home theater setup.
To be sure, Sonos has long faced criticism for its app, which, while offering a sleek design and robust control options, has been plagued by occasional connectivity issues and limited flexibility.
Users often report frustrations with delayed updates, difficulty adding new devices, and problems syncing across the ecosystem. These woes are especially frustrating given the premium price of Sonos products, which sets high expectations for seamless integration.
Although recent updates have aimed to address some of these issues, the app experience still leaves room for improvement, particularly as competitors continue to refine their platforms. While I have suffered through some of these issues myself (particularly with Sonos’ terrific over-the-ears Ace headphones), the app has thankfully matured to the point that it didn’t inhibit setup.
Still, from a pure hardware standpoint, the Sonos Arc Ultra represents a significant advancement over its predecessor, offering enhanced bass performance, improved dialogue clarity, and a more immersive soundstage.
Its sleek design and expanded connectivity options, including Bluetooth and broader Trueplay support, make it a compelling choice for users seeking a high-quality, all-in-one soundbar solution. However, the lack of HDMI passthrough and DTS support may be a consideration for potential buyers.
Overall, the Arc Ultra is a superb soundbar that elevates the home audio experience, making it a worthy contender in the premium soundbar market.
A near-production model of a solar-powered car will be on display at CES 2025.
Aptera Motors has announced that a “production intent” version of its eponymously named solar-powered vehicle will be displayed at the mammoth consumer electronics show, which will be held January 7-10 in Las Vegas.
The Aptera offers up to 40 miles of solar-powered driving per day, a three-wheel futuristic design, unparalleled energy efficiency, and the option to plug in and add 400 miles of range in under an hour, according to the company.
“Announcement of a production-intent model means Aptera has a vehicle that should comply with regulatory requirements and should be at a level where the design is viable for manufacturing, hitting performance, safety, and manufacturing requirements,” Rob Enderle, president and principal analyst at the Enderle Group, an advisory services firm, in Bend, Ore., told TechNewsWorld.
“Aptera is being watched with a great deal of interest by many in the automotive industry,” added Edward Sanchez, a senior analyst in the automotive practice at TechInsights, a global technology intelligence company.
“It is a radical departure from most mainstream cars,” he told TechNewsWorld. “There’s a big question of demand and mainstream appeal for such an unconventional design.”
“The company is also using some manufacturing techniques that, up to this point, have been mainly used in the supercar and motorsports industries,” he continued.
“The company is targeting a competitive price point for its vehicle, so it will be interesting to see how these specialized techniques and materials will scale for what’s intended to be a quasi-mass-market vehicle — from a volume standpoint — and if the company can maintain competitive operating margins over the longer term.”
Mark N. Vena, president and principal analyst at SmartTech Research, a consulting and research firm in Las Vegas, maintained that the scrutiny around Aptera’s demo will be incredibly high. “The introduction of a production-intent vehicle signals that Aptera is transitioning from the prototype phase to a model ready for mass manufacturing, a critical milestone in its development,” he told TechNewsWorld.
“This step demonstrates the company’s confidence in the design, functionality, and manufacturability of the vehicle, aligning with industry standards and regulatory requirements,” he continued. “It also helps build consumer and investor trust by showcasing a tangible product that is nearing market readiness, setting the stage for final testing, production scaling, and eventual delivery. I’m not optimistic.”
However, that dearth of optimism doesn’t seem to be shared by the early adopters who have placed pre-orders for 50,000 units of the vehicle, valued at US$1.7 billion.
“CES is the perfect stage for unveiling the future of sustainable transportation,” Aptera Co-CEO Chris Anthony said in a statement.
“Our production-intent vehicle is not only a testament to years of innovation and engineering but also a tangible solution to reducing carbon emissions and redefining how we think about energy-efficient mobility. We’re excited to show the world that Aptera is ready to hit the road and deliver a cleaner, more sustainable future.”
To secure that future, though, will require surmounting some significant challenges. “Generally, there just isn’t enough surface area on a car for the current solar panel technology to do more than just run HVAC to keep the car cool,” Enderle explained. “Recharging the massive batteries in most EVs using panels on a car would take days to weeks.”
Ben Zientara, a solar policy and industry expert at SolarReviews, a reviews and advice website, asserted that there is no way to power the kind of vehicle people want to drive with solar cells embedded in a car’s surface.
“Even the most efficient solar cells can provide only a few miles of additional range per day, even if parked in the sunniest spot in the sunniest state,” he told TechNewsWorld. “The average electric vehicle can get about 3.5 miles of range with one kilowatt-hour of electricity. A car with solar cells can generate maybe three to four kWh per day, which is enough to drive 10 to 14 miles per day on just solar power.”
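Zientara’s arithmetic is easy to reproduce from the figures he cites:

```python
# Back-of-the-envelope check of Zientara's figures.
miles_per_kwh = 3.5        # average EV efficiency he cites
kwh_per_day = (3.0, 4.0)   # daily on-car solar harvest range he cites

low, high = (k * miles_per_kwh for k in kwh_per_day)
print(f"Solar-only range: {low:.1f} to {high:.1f} miles/day")  # ~10.5 to 14.0
```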
He pointed out that two past attempts at solar-powered EVs, the Sono Sion and the Lightyear One, both had solar cells that peaked at around 1.2 kilowatts of power under full sun. “This means that the car would need to be perfectly clean and parked in the ideal location on a very sunny day for several hours to get the maximum 14 miles per day range,” he said.
“I don’t see a huge opportunity unless we see meaningful progress with the technology and the cost that would enable them to compete with existing vehicles today, including electric vehicles and those with internal combustion engines,” added Seth Goldstein, an equity strategist and chair of the electric vehicle committee at Chicago-based Morningstar Research Services.
He explained that Aptera is targeting 40 miles for an all-solar range, after which the vehicle becomes an electric car with a battery. “I just don’t see consumers really being willing to pay extra for 40 miles of solar-powered driving.”
Even if the surface-to-power problem is addressed, there are other vagaries of solar power. “Cars are subject to weather conditions, falling leaves, bird poop, and other debris that cause loss of power output from solar cells,” Zientara noted.
He added that it is nearly impossible to perfectly orient the solar cells in a car’s surface to the sun. “To get the most from solar cells, they must be angled exactly perpendicular to incoming sunlight,” he explained. “A car has many, many different surfaces, all of which are angled in different directions. If you maximize the orientation of one surface, the others are not pointed directly at the sun.”
Then there’s the problem of the sun moving across the sky. “Even if you can point one or more of a vehicle’s faces directly at the sun, they won’t stay that way for very long,” Zientara noted. “And the sun’s path also changes throughout the year, shining down more directly on the northern hemisphere in the summer and much less directly in the winter. So solar cells in a car’s surface will generate more energy during the summer and less during the winter, regardless of weather.”
Solar-powered vehicles may find a place in niche markets. “Solar-powered vehicles are potentially sufficient for use cases where extended travel in sunny regions can maximize energy generation, such as rural or remote areas with limited access to charging infrastructure,” Vena said. “They are well-suited for low-speed, short-distance transportation, like delivery services, campus shuttles, or recreational vehicles, where energy demand is lower.”
“Solar-powered vehicles can also serve as backup power sources or sustainable alternatives for off-grid living, contributing to energy independence and reduced carbon footprints,” he added.
“They won’t be viable for most people,” Enderle acknowledged, “but for those who can use them, or who need them for living off the grid or because they have no viable charging alternatives, they could be a godsend.”
Enterprise-grade Chromebooks give schools and organizations that deploy vast numbers of computers a much-needed edge against cybersecurity risks.
Consumer-grade Chromebooks come with what Google calls “defense in depth,” which provides multiple layers of protection. If attackers succeed in bypassing one layer, others remain in effect. The networked Chromebooks deployed in school systems, medical facilities, and government offices take multi-layer security and boost it with additional features. One of them is Zero Trust security, a framework that verifies every user and device.
All Chromebook devices run ChromeOS, an embedded operating system built around Google’s Chrome web browser, and all run the same Google-certified operating system image. This uniformity, paired with heightened built-in security and automatic updates, is designed for Zero Trust security and requires no monitoring by users.
Endpoint resilience and data protection are two critical components of Zero Trust, augmented by robust data loss prevention (DLP) and granular access controls. Running enterprise-level Chromebooks on an organization’s network is easily maintained by the IT system administrator through a console inaccessible to users.
The approach works whether students or employees use the Chromebook devices internally or remotely, ensuring that security shields are always engaged. For example, users can access their devices using QR codes and picture-based login options.
“Schools have become frequent targets for cyberattacks like ransomware, phishing, and malware,” said Jeremy Burnett, vice president of technology at CTL, during a recent seminar where his company presented on the updated security features built into both consumer and enterprise Chromebooks.
CTL is a Chromebook manufacturer and ChromeOS OEM service provider that partners with Google to deliver tailored solutions for educators, learners, and businesses. These solutions address the growing threats of cyberattacks faced by schools and organizations.
According to Andrew Luong, partner success engineer for Google and ChromeOS, the goal is to have strong authentication with second factors or security keys. Despite the other login options, students and others less familiar with technology prefer passwords.
“Making users change passwords frequently is complex because every app you use today asks for longer and more complex passwords. It’s become quite a hassle,” he told the virtual seminar audience.
Google’s password manager has been super helpful in generating stronger passwords because the more often you have to change them, the less likely you are to remember them. Google’s various login tools help users manage stronger passwords.
Another major challenge is device health, he added. Devices must be updated regularly with the latest security patches.
“Using ChromeOS is where we really shine,” noted Luong. “ChromeOS devices update automatically, a key benefit and differentiator, with all running the same Google-certified operating image.”
However, he added that school IT teams must ensure these devices stay connected to receive those updates and remain on an approved version in compliance with district or school policy.
The IT administration console makes it easy to keep devices on a particular version of ChromeOS so that students can take their tests and teachers and staff can use their classroom tools.
“What we are doing in our console is having Google AI surface and show you, as you log into the Cloud Console, that devices are all up to date,” he said.
Updates are installed in the background on the second copy of the OS. The process does not interfere with any user’s work. When all the updates are downloaded, a reboot button appears to load the new OS version.
Chromebooks include Verified Boot, a trust connector technology that verifies the integrity of the operating system during startup and ensures the system has not been tampered with. If tampering or corruption is detected, the system automatically attempts to repair itself, often by restoring the OS to its original state. This ensures that the operating system remains secure and intact, addressing any failures in its integrity.
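ChromeOS’s real Verified Boot relies on signed firmware and cryptographic verification of the root filesystem, but the verify-then-repair control flow it describes can be loosely illustrated. The sketch below is a simplified illustration only; the function names and partition arguments are hypothetical, not ChromeOS code.

```python
# Loose illustration of verify-then-repair boot logic; not ChromeOS's actual
# implementation. All names and partition arguments are hypothetical.
import hashlib

def verify(partition_bytes: bytes, expected_sha256: str) -> bool:
    """Check a partition image against its known-good digest."""
    return hashlib.sha256(partition_bytes).hexdigest() == expected_sha256

def boot(primary: bytes, backup: bytes, expected: str) -> str:
    if verify(primary, expected):
        return "boot primary OS copy"
    # Tampering or corruption detected: restore from the known-good copy.
    if verify(backup, expected):
        return "restore primary from backup, then boot"
    return "enter recovery mode"
```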
Enterprise Chromebooks now have context-aware signals to check the integrity of the running ChromeOS version before it allows the devices to connect to school applications. That is an innovation in the zero-trust architecture framework, explained Luong.
Another recent security feature added to the IT management console is threat detection and response, which does not use any agents. The management license enables admins to configure and monitor information flowing from ChromeOS device security events into the security event notification system.
“So centralized reporting and insights make it easy to have that zero-trust framework and enhance your cybersecurity,” he said. “ChromeOS has built-in malware protection. No ransomware has ever been reported [on ChromeOS devices].”
These enhanced enterprise cybersecurity features are available to enterprise-grade devices through the admin console under a licensed plan from an authorized provider like CTL. Consumer-grade Chromebooks still include the other features mentioned, such as automatic updates and built-in malware and antivirus protection.
Luong stressed an essential point about the rigorous cybersecurity protections inherent in all Chromebook devices. They cannot always survive careless employee actions.
“When it comes to phishing, about 90% of data breaches in K-12 schools result from a system employee who is clicking on a link — and that is not a knock on school system employees,” he said.
If that clicking results in a ransomware attack, the fault is not with the Chromebooks. Educational institutions are among the most targeted sectors.
That is where cybersecurity training comes into play. On average, U.S. schools and colleges lose about $500,000 a day to downtime during ransomware attacks. So, the stakes are high when something happens, Luong observed.
CyberNut offers security awareness training. The company’s platform is designed to be engaging, built around short, gamified micro-training sessions.
“The real objective is to allow schools to measure behavior change. Our success is not just based on checking a box for faculty staff after they watch a short video and take a quiz. We are laser-focused and deliver measurable behavior change through an ongoing, perpetual training experience,” said Oliver Page, co-founder and CEO of CyberNut.
The company offers a free trial, allowing organizations to learn about cybersecurity training. That includes a free phishing assessment to show how a school district is positioned from a security posture perspective.
The quality of phishing emails has become more sophisticated over the past 10 or 20 years, and ransomware attacks on K-12 schools have increased substantially in the last year. According to Page, most of those attacks arrive through malicious email and phishing.
“That is scary because it depends on how you calculate that number. If you are talking about schools that were targeted in some way and something happened, it is closer to 100% of schools receiving malicious emails that could lead to a ransomware attack every day. So, it is prevalent,” said Page.
Several factors put schools in the crosshairs so prominently. Among the primary causes is a lack of budget, which leads to a lack of staffing and expertise.
“That gets bad when we couple it with thousands of devices to manage and secure. We have tons of extremely valuable data,” Page warned.
The median ransomware payment last year was $6.5 million. On top of that ransom, schools are looking at millions more in recovery costs.
Another reality is that nobody teaches students about cyber safety, he added. Parents spend an average of 46 minutes educating their children on cybersecurity over their entire lifetime.
“Couple that with the fact that the average child above the age of 13 spends seven hours a day online, it is easy to see where the disparity and the concern lies,” he concluded.
High-powered computer chip maker Nvidia on Monday unveiled a new AI model developed by its researchers that can generate or transform any mix of music, voices, and sounds described with prompts using any combination of text and audio files.
The new AI model called Fugatto — for Foundational Generative Audio Transformer Opus — can create a music snippet based on a text prompt, remove or add instruments from an existing song, change the accent or emotion in a voice, and even produce sounds never heard before.
According to Nvidia, by supporting numerous audio generation and transformation tasks, Fugatto is the first foundational generative AI model that showcases emergent properties — capabilities that arise from the interaction of its various trained abilities — and the ability to combine free-form instructions.
“We wanted to create a model that understands and generates sound like humans do,” Rafael Valle, a manager of applied audio research at Nvidia, said in a statement.
“Fugatto is our first step toward a future where unsupervised multitask learning in audio synthesis and transformation emerges from data and model scale,” he added.
Nvidia noted the model is capable of handling tasks it was not pretrained on, as well as generating sounds that change over time, such as the Doppler effect of thunder as a rainstorm passes through an area.
The company added that unlike most models, which can only recreate the training data they’ve been exposed to, Fugatto allows users to create soundscapes it’s never seen before, such as a thunderstorm easing into dawn with the sound of birds singing.
“Nvidia’s introduction of Fugatto marks a significant advancement in AI-driven audio technology,” observed Kaveh Vahdat, founder and president of RiseOpp, a national CMO services company based in San Francisco.
“Unlike existing models that specialize in specific tasks — such as music composition, voice synthesis, or sound effect generation — Fugatto offers a unified framework capable of handling a diverse array of audio-related functions,” he told TechNewsWorld. “This versatility positions it as a comprehensive tool for audio synthesis and transformation.”
Vahdat explained that Fugatto distinguishes itself through its ability to generate and transform audio based on both text instructions and optional audio inputs. “This dual-input approach enables users to create complex audio outputs that seamlessly blend various elements, such as combining a saxophone’s melody with the timbre of a meowing cat,” he said.
Additionally, he continued, Fugatto’s capacity to interpolate between instructions allows for nuanced control over attributes like accent and emotion in voice synthesis, offering a level of customization not commonly found in current AI audio tools.
“Fugatto is an extraordinary step towards AI that can handle multiple modalities simultaneously,” added Benjamin Lee, a professor of engineering at the University of Pennsylvania.
“Using both text and audio inputs together may produce far more efficient or effective models than using text alone,” he told TechNewsWorld. “The technology is interesting because, looking beyond text alone, it broadens the volumes of training data and the capabilities of generative AI models.”
Mark N. Vena, president and principal analyst at SmartTech Research in Las Vegas, asserted that Fugatto represents Nvidia at its best.
“The technology introduces advanced capabilities in AI audio processing by enabling the transformation of existing audio into entirely new forms,” he told TechNewsWorld. “This includes converting a piano melody into a human vocal line or altering the accent and emotional tone of spoken words, offering unprecedented flexibility in audio manipulation.”
“Unlike existing AI audio tools, Fugatto can generate novel sounds from text descriptions, such as making a trumpet sound like a barking dog,” he said. “These features provide creators in music, film, and gaming with innovative tools for sound design and audio editing.”
Fugatto deals with audio holistically — spanning sound effects, music, voice, virtually any type of audio, including sounds that have not been heard before — and precisely, added Ross Rubin, the principal analyst with Reticle Research, a consumer technology advisory firm in New York City.
He cited the example of Suno, a service that uses AI to generate songs. “They just released a new version that has improvements in how generated human voices sound and other things, but it doesn’t allow the kinds of precise, creative changes that Fugatto allows, such as adding new instruments to a mix, changing moods from happy to sad, or moving a song from a minor key to a major key,” he told TechNewsWorld.
“Its understanding of the world of audio and the flexibility that it offers goes beyond the task-specific engines that we’ve seen for things like generating a human voice or generating a song,” he said.
Vahdat pointed out that Fugatto can be useful in both advertising and language learning. Agencies can create customized audio content that aligns with brand identities, including voiceovers with specific accents or emotional tones, he noted.
At the same time, in language learning, educational platforms will be able to develop personalized audio materials, such as dialogues in various accents or emotional contexts, to aid in language acquisition.
“Fugatto technology opens doors to a wide array of applications in creative industries,” Vena maintained. “Filmmakers and game developers can use it to create unique soundscapes, such as turning everyday sounds into fantastical or immersive effects,” he said. “It also holds potential for personalized audio experiences in virtual reality, assistive technologies, and education, tailoring sounds to specific emotional tones or user preferences.”
“In music production,” he added, “it can transform instruments or vocal styles to explore innovative compositions.”
Further development may be needed to get better musical results, however. “All these results are trivial, and some have been around for longer — and better,” observed Dennis Bathory-Kitsz, a musician and composer in Northfield Falls, Vt.
“The voice isolation was clumsy and unmusical,” he told TechNewsWorld. “The additional instruments were also trivial, and most of the transformations were colorless. The only advantage is that it requires no particular learning, so the development of musicality for the AI user will be minimal.”
“It may usher in some new uses — real musicians are wonderfully inventive already — but unless the developers have better musical chops to begin with, the results will be dreary,” he said. “They will be musical slop to join the visual and verbal slop from AI.”
With artificial general intelligence (AGI) still very much in the future, Fugatto may be a model for simulating AGI, which ultimately aims to replicate or surpass human cognitive abilities across a wide range of tasks.
“Fugatto is part of a solution that uses generative AI in a collaborative bundle with other AI tools to create an AGI-like solution,” explained Rob Enderle, president and principal analyst at the Enderle Group, an advisory services firm in Bend, Ore.
“Until we get AGI working,” he told TechNewsWorld, “this approach will be the dominant way to create more complete AI projects with far higher quality and interest.”
Nvidia has faced scrutiny this month because some servers with a whopping 72 Blackwell processors were overheating. The issue arose because some initial OEM deployments were not properly water-cooled, which Lenovo aggressively identified and mitigated with its Neptune warm water-cooling solutions.
As AI advances, we’ll need more highly dense, incredibly powerful AI processors, which suggests that air cooling in server rooms may become obsolete.
Let’s talk about Blackwell, water cooling, and why Lenovo’s Neptune solution stands out at the moment. We’ll close with my Product of the Week: Microsoft’s Windows 365 Link, which could be the missing link between PCs and terminals that could forever change desktop computing.
Blackwell is Nvidia’s premier, AI-focused GPU. When it was announced, it was so far over what most would have thought practical that it almost seemed more like a pipe dream than a solution. But it works, and there is nothing close to its class right now. However, it is massively dense in terms of technology and generates a lot of heat.
Some argue it is a potential ecological disaster. Don’t get me wrong, it does pull a lot of power and generate a tremendous amount of heat. But its performance is so high compared to the kind of load that you’d typically get with more conventional parts that it is relatively economical to run.
It’s like comparing a semi-truck with three trailers to a U-Haul van. Yes, the semi will get comparatively crappy gas mileage, but it will also hold more cargo than 10 U-Haul vans and use a lot less gas than those 10 vans, making it more ecologically friendly. The same is true of Blackwell. It is so far beyond its competition in terms of performance that its relatively high energy use is below what otherwise would be required for a competitive AI server.
But Blackwell chips do run hot, and most servers today are air-cooled. So, it shouldn’t be surprising that some Blackwell servers were configured with air cooling and those with 72 or more Blackwell processors on a rack overheated. While 72 Blackwells in a rack is unusual today, as AI advances, it will become more common, given Nvidia is currently the king of AI.
You can only go so far with air-cooled technology in terms of performance before you have to move to liquid cooling. While Nvidia did respond to this issue with a water-cooled rack specification that Dell is now using, Lenovo was way ahead of the curve with its Neptune water-cooling solution.
Lenovo was the first to realize this, mainly because it is currently the market leader in its class for water cooling — a technology initially acquired from IBM, which has been doing water cooling for decades.
What is important with water cooling isn’t just the technology but the knowledge of how to deploy it safely. Mixing water and high-amperage electronics can be a disaster if you don’t know what you’re doing. As a result of the IBM server acquisition, Lenovo has decades of water cooling experience that it calls Neptune.
Given that Nvidia has specified a water-cooled rack, what makes Neptune better? The answer is experience. Most of those that will use the Nvidia-specified solution, including Nvidia itself, don’t often deploy water-cooled systems. As a result, particularly with these high-end Blackwell implementations, they’ll essentially be learning on the job.
It can be really dangerous when you mix water with high-amperage electronics. Water and electricity don’t mix. Not only can a leak fry an expensive part or even an entire rack, but if a person is present, it can fry them, too, if the breakers don’t trip. In a raised-floor environment, unless it has been designed with leaks in mind, terrible things can happen.
I observed this myself decades ago when I was at IBM, and it turned out they hadn’t stress-tested the water-cooling system for our massive (for the time) data center. The site lost a transformer that shut off the water-cooling system, which hadn’t been stress-tested for a sudden stop. The pipes burst, and the data center became a dangerous swimming pool. Most of the hardware, costing hundreds of millions of dollars, was lost, and the building was flooded, doing additional damage.
Through experiences like this, IBM became the leading OEM for safe water cooling, and Lenovo acquired that knowledge and experience when it bought the IBM x86 server group. Now, Lenovo, along with IBM, knows how to do water cooling better than most, which means that you can rest assured that a Lenovo Blackwell server won’t overheat or suddenly begin to leak.
Plus, Lenovo’s expertise is in warm water cooling, a far safer and far less expensive way to cool servers than cold water cooling, which requires huge, inefficient evaporators or chillers.
Implementing this technology is no trivial task. Unlike water-cooled automobiles or PCs, servers must support hot swapping, which means you need exceptional, highly tested drip-free connections, aggressive alerting, preventive maintenance schedules based on past knowledge of components, and technicians experienced with this level of water-cooling tech.
Blackwell is only the first of these incredibly powerful processors to hit the market because as AI pushes the envelope, Nvidia’s competitors will also have to push into something similar, suggesting all servers may eventually need to be warm water cooled.
That positions Lenovo nicely for a water-cooled future regardless of the technology while Lenovo’s competitors try to catch up. One benefit I expect techs to look forward to is the reduction in data center noise. The amount of air you have to push through air-cooled servers is massive and turns today’s data centers into a noise nightmare.
As warm-water cooling moves into the market more aggressively, these data centers will quiet down, making them far more pleasant places to work. That will make many of us who have to work in them very happy.
Ever since we replaced terminals with PCs, IT has wanted the terminal experience back. Terminals were like pre-smart TVs in that you didn’t have to do patches or OS upgrades or deal with the “blue screen of death.” If the thing broke, it was pretty easy to fix or was relatively inexpensive to replace. From an IT perspective, terminals were a ton better than PCs.
But on the user side, terminals sucked. You couldn’t run what you wanted to run without getting IT support, and it could take months for IT to respond to a request.
Terminals were connected to aging mainframes that couldn’t run modern applications at the time (they can now). New applications were usually custom-built, but a gap in communication between users and IT frequently led to problems. Users struggled to articulate their needs, and IT often failed to probe for better specifications, resulting in frequently unusable applications.
Well, at Microsoft Ignite last week, Microsoft announced the Windows 365 Link, which may be the closest thing to a perfect wired (there’s no laptop solution yet) terminal with PC-like features and performance.
While we call the class a thin client, Microsoft calls this a Cloud PC. At $349 and the size of a micro-PC, it appears to be the closest thing we’ve seen to a near-perfect PC/terminal blend.
Windows 365 Link will be more reliable, cheaper, more secure, and far smaller than most desktop PCs, making it very attractive for IT. At the same time, it connects to a Cloud PC instance, providing the user with a very PC-like experience.
It only targets enterprise accounts right now, mainly because they have the greatest need and the necessary infrastructure. I see this moving to markets like travel, education, government, manufacturing, and other vertical markets with similar needs. Although it doesn’t yet address mobile users, fully deployed 5G and the coming 6G specification should allow future mobile implementations.
Given that Microsoft was one of the companies that launched the PC and made terminals obsolete, it seems ironic, and poetic, that Microsoft is now taking the lead in eventually making the PC obsolete. We’ll see if that happens. For now, the Windows 365 Link is my Product of the Week.
Not all Linux distributions serve both enterprise and non-business adopters. Red Hat Enterprise Linux (RHEL) and the Fedora Project let users keep their Linux computing all in the family.
Both the enterprise and community versions have been upgraded over the last few weeks. RHEL is a commercial distribution available through a subscription and does not rely solely on community support. On the other hand, Fedora Linux is a free distro supported and maintained by the open-source community. In this case, Red Hat is the Fedora Project’s primary sponsor.
However, independent developers in the Fedora community also contribute to the project. Often, Fedora Linux is a proving ground for new features that ultimately become part of the RHEL operating system.
While Fedora caters to developers and enthusiasts, RHEL focuses on delivering enterprise-grade solutions. What’s the difference? Each edition caters to the needs of users’ business or consumer goals. Of course, using Fedora comes at a great price: it is free to download.
On Nov. 13, Red Hat released Red Hat Enterprise Linux 9.5 with improved functionality in deploying applications and more effectively managing workloads across hybrid clouds while mitigating IT risks from the data center to public clouds to the edge. Matt Miller, the Fedora Linux project leader, announced the release of Fedora 41 on Oct. 29.
According to global market intelligence firm IDC, organizations struggle to strike a balance between maintaining their Linux operating system environments and their workloads, hampered by time and resource constraints. The proliferation of the cloud and of next-generation workloads such as AI and ML further erodes their computing productivity.
RHEL standardization increased the agility of IT infrastructure management teams by consolidating OSes, automating highly manual tasks such as scaling and provisioning, and decreasing the complexity of deployments. As a result, infrastructure teams spent 26% more time on business and infrastructure innovation, Red Hat noted.
RHEL 9.5 delivers enhanced capabilities to bring more consistency to the operating system underpinning rapid IT innovations. This extends to the use of artificial intelligence (AI) in edge computing, helping make these booming advancements an accessible reality for more organizations.
Enterprise IT complexity is growing exponentially, fueled by the rapid adoption of new technologies like AI. This growth affects both the applications Red Hat develops and the environments in which they operate, according to Gunnar Hellekson, VP and GM for Red Hat Enterprise Linux.
“While more complexity can impact the attack surface, we are committed to making Red Hat Enterprise Linux the most secure, zero-trust platform on the market so businesses can tackle each challenge head-on with a secure base at the most fundamental levels of a system. This commitment enables the business to embrace the next wave of technology innovations,” he told LinuxInsider.
The release includes a collection of Red Hat Ansible Content subscriptions that automate everyday administrative tasks at scale. The latest version also adds several new system roles, including one for sudo, the Linux privilege-escalation utility, to automate its configuration at scale.
By leveraging this capability, users can execute commands typically reserved for administrators while proper guardrails ensure rules are managed effectively. With automation, users with elevated privileges can implement sudo configurations securely and consistently across their environments, helping organizations reduce complexity and improve operational efficiency.
Increased platform support for confidential computing enables data protection for AI workloads and lowers the attack surface for insider threats. By preventing potential threats from viewing or tampering with sensitive data, confidential computing allows enterprises to have more opportunities to use AI more securely to review large amounts of data while still maintaining data segmentation and adhering to data compliance regulations.
The Image Builder feature advances a “shift left” approach by integrating security testing and vulnerability fixes earlier in the development cycle. This methodology delivers pre-hardened image configurations to customers, enhancing security while reducing setup time. The benefit of these built-in capabilities is that users can configure secure images without being security experts.
Management tools simplify system administration. Users can automate manual tasks, standardize deployment at scale, and reduce system complexities.
New file management capabilities in the web console allow routine file management tasks without using the command line, simplifying actions such as browsing the file system, uploading and downloading files, changing permissions, and creating directories.
Another benefit addresses cloud computing storage. Container-native innovation at the platform level fully supports Podman 5.0, the latest version of the open-source container engine. It gives developers a powerful tool for building, managing, and running containers in Linux environments.
According to Greg Macatee, research manager for infrastructure software platforms and worldwide infrastructure research at IDC, companies using the new RHEL release validated that the platform simplified management while reducing overall system complexity.
“They also noted that it radically reduced the time required for patching while simplifying kernel modifications and centralizing policy controls. They further called out the value of automation, better scalability, and access to Red Hat Enterprise Linux expertise,” he told LinuxInsider.
Application streams provide the latest curated developer tools, languages, and databases needed to fuel innovative applications. Red Hat Enterprise Linux 9.5 includes pgvector for PostgreSQL and new versions of Node.js and the GCC, Rust, and LLVM toolsets.
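For readers unfamiliar with pgvector, it adds a vector column type and distance operators to PostgreSQL, which is what makes similarity search possible inside the database. Below is a minimal sketch, assuming a local database where the extension is available and the psycopg driver is installed; the database name and table are hypothetical.

```python
# Minimal pgvector sketch; assumes a local PostgreSQL where the extension is
# available. Database name and table are hypothetical placeholders.
import psycopg  # pip install "psycopg[binary]"

with psycopg.connect("dbname=demo") as conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
    cur.execute(
        "CREATE TABLE IF NOT EXISTS items (id serial PRIMARY KEY, embedding vector(3))"
    )
    cur.execute("INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[2,2,2]')")
    # <-> is pgvector's Euclidean-distance operator; nearest neighbors first.
    cur.execute("SELECT id FROM items ORDER BY embedding <-> '[1,1,1]' LIMIT 1")
    print(cur.fetchone())
```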
While Java Development Kit (JDK) 11 reached its end of maintenance in RHEL 9, this new release continues supporting customers using it. The new default, JDK 17, brings new features and tools for building and managing modern Java applications while maintaining backward compatibility to keep JDK upgrades consistent for applications and users.
The Fedora Project calls its community operating system Fedora Workstation. It provides a polished Linux OS for laptop and desktop computers and a complete set of tools for developers and consumers at all experience levels.
Fedora Server provides a flexible OS for users needing the latest data center technologies. The community also offers Fedora IoT, a foundation for IoT ecosystems, the Fedora Cloud edition, and Fedora CoreOS for container-focused operations.
According to Miller’s announcement in the online Fedora Magazine, Fedora 41 includes updates to thousands of packages, ranging from tiny patches to extensive new features. These include a new major release of the command-line package management tool, DNF (Dandified YUM), which improves performance and enhances dependency resolution.
The Workstation edition offers various desktop environment options, known as spins, including Xfce, LXQt, and Cinnamon. It also introduces a new line of Atomic-flavored desktops, which streamline updates by bundling them into a single image that users install, eliminating the need to download multiple package updates.
Fedora Workstation 41 is based on GNOME 47. One of its main changes is its customization potential. In the Appearance settings, you can change the standard blue accent color of GNOME interfaces, choosing from an assortment of vibrant colors. Enhanced small-screen support gives users with lower-resolution displays optimized icons scaled for easier interaction and better visibility, and new-style dialog windows enhance usability across many screen sizes.