
Best Internet Providers in Eugene, Oregon

What is the best internet provider in Eugene?

While options in Eugene may be limited, Xfinity stands out as the top internet service provider, according to CNET, offering the lowest starting prices, fast download speeds and no equipment fees on most plans. It’s widely available across the city, providing fast and reliable internet access at nearly every address.

Though not as widespread as Xfinity, Hunter Fiber delivers the fastest internet speeds in Eugene. As the only fiber provider in the area, it offers symmetrical upload and download speeds.

T-Mobile Home Internet is another solid alternative, with download speeds ranging from 87 to 415Mbps — ideal for small to medium-sized households. Plus it comes with a price-lock guarantee. Discover more about Eugene’s leading broadband providers below.

Best internet in Eugene, Oregon

Eugene internet providers compared

  • CenturyLink (DSL): $55 per month, up to 100Mbps, $15 monthly equipment fee (optional), no data cap, no contract, CNET review score 6.7
  • Hunter Fiber (fiber): $60-$120 per month, 500-2,500Mbps, no equipment fee, no data cap, no contract, no CNET review score yet
  • T-Mobile Home Internet (fixed wireless): $50-$70 per month ($35-$55 with eligible mobile plans), 87-415Mbps, no equipment fee, no data cap, no contract, CNET review score 7.4
  • Xfinity (cable): $20-$90 per month, 150-2,000Mbps, $15 monthly equipment fee (included in most plans), 1.2TB data cap, optional contract, CNET review score 7


Source: CNET analysis of provider data.

What’s the cheapest internet plan in Eugene?

  • Xfinity Connect: $20 per month, up to 150Mbps download, $15 monthly equipment fee (optional)
  • Xfinity Connect More: $40 per month, up to 300Mbps download, $15 monthly equipment fee (optional)
  • T-Mobile Rely Home Internet: $50 per month ($35 with eligible mobile plans), up to 318Mbps download, no equipment fee


Source: CNET analysis of provider data.

Sean Pavone / Getty Images

How to find internet deals and promotions in Eugene

The best internet deals and the top promotions in Eugene depend on what discounts are available during that period. Most deals are short-lived, but we look frequently for the latest offers. 


Eugene internet providers, such as Xfinity, may offer lower introductory pricing or streaming add-ons for a limited time. Many, however, including CenturyLink, Hunter Fiber and T-Mobile Home Internet, run the same standard pricing year-round.

For a more extensive list of promos, check out our guide on the best internet deals.

Fastest internet plans in Eugene

  • Hunter Fiber Ultimate 2.5G (fiber): $120 per month, 2,500Mbps download, 2,500Mbps upload, no data cap
  • Xfinity Gigabit Extra (cable): $90 per month, 2,000Mbps download, 300Mbps upload, 1.2TB data cap
  • Hunter Fiber Pro 1G (fiber): $80 per month, 1,000Mbps download, 1,000Mbps upload, no data cap
  • Xfinity Gigabit (cable): $75 per month, 1,000Mbps download, 120Mbps upload, 1.2TB data cap


Source: CNET analysis of provider data.

What’s a good internet speed?

Most internet connection plans can now handle basic productivity and communication tasks. If you’re looking for an internet plan that can accommodate videoconferencing, streaming video or gaming, you’ll have a better experience with a more robust connection. Here’s an overview of the recommended minimum download speeds for various applications, according to the FCC. Note that these are only guidelines — and that internet speed, service and performance vary by connection type, provider and address.

For more information, refer to our guide on how much internet speed you really need.

  • 0 to 5Mbps allows you to tackle the basics — browsing the internet, sending and receiving email, streaming low-quality video.
  • 5 to 40Mbps gives you higher-quality video streaming and videoconferencing.
  • 40 to 100Mbps should give one user sufficient bandwidth to satisfy the demands of modern telecommuting, video streaming and online gaming. 
  • 100 to 500Mbps allows one to two users to simultaneously engage in high-bandwidth activities like videoconferencing, streaming and online gaming. 
  • 500 to 1,000Mbps allows three or more users to engage in high-bandwidth activities at the same time.

How CNET chose the best internet providers in Eugene

Internet service providers are numerous and regional. Unlike the latest smartphone, laptop, router or kitchen tool, it’s impractical to personally test every ISP in a given city. So what’s our approach? We start by researching the pricing, availability and speed information, drawing on our own historical ISP data, the provider sites and mapping information from the Federal Communications Commission at FCC.gov.

But it doesn’t end there. We go to the FCC’s website to check our data and ensure we consider every ISP that provides service in an area. We also input local addresses on provider websites to find specific options for residents. We look at sources, including the American Customer Satisfaction Index and J.D. Power, to evaluate how happy customers are with an ISP’s service. ISP plans and prices are subject to frequent changes; all information provided is accurate as of publication.

Once we have this localized information, we ask three main questions:

  1. Does the provider offer access to reasonably fast internet speeds?
  2. Do customers get decent value for what they’re paying?
  3. Are customers happy with their service?

While the answer to those questions is often layered and complex, the providers who come closest to “yes” on all three are the ones we recommend. When it comes to selecting the cheapest internet service, we look for the plans with the lowest monthly fee, though we also factor in things like price increases, equipment fees and contracts. Choosing the fastest internet service is relatively straightforward. We look at advertised upload and download speeds and also take into account real-world speed data from sources like Ookla and FCC reports. (Ookla is owned by the same parent company as CNET, Ziff Davis.)

To explore our process in more depth, visit our how we test ISPs page.

Internet providers in Eugene FAQs

What is the best internet service provider in Eugene?

Xfinity is the best internet service provider in Eugene. It’s the only wired connection widely available in the city, and it offers the cheapest prices of any provider. That said, you’ll have to deal with data caps on every plan, and prices increase after one or two years. 

Is fiber internet available in Eugene?

Yes, fiber internet is available to 22% of Eugene residents, according to FCC data — primarily through Hunter Fiber. 

Who is the cheapest internet provider in Eugene?

Xfinity is the cheapest internet provider in Eugene, with plans starting at $20 a month for 150Mbps download speeds. The price on this plan increases to $67 monthly after the introductory period.

Which internet provider in Eugene offers the fastest plan?

Hunter Fiber offers the fastest internet plan in Eugene, with upload and download speeds of 2,500Mbps. 



2024-12-16 06:21:00


What is inferencing and training in AI?

Artificial intelligence (AI) might seem like a machine learning (ML) magician casting spells behind the scenes, but even maestros must learn their magic. That’s where training and inferencing come in – the dynamic duo transforming AI from a clueless apprentice to a master predictor. You can think of training as the intense cram session where AI models absorb everything they can from data, while inferencing is their time to shine – putting all that know-how into action.

Have you ever wondered how AI “gets you” with those strangely accurate streaming suggestions or how a chatbot feels one step ahead of your questions? It’s all thanks to training and inferencing – the tag team behind everything from face recognition to digital assistants.

In a moment, we’ll crack the code on how AI trains, predicts, and, yes, even “makes cents” of complex data. With years of tech expertise and a talent for simplifying complex topics, we’re here to help you understand the magic behind AI with trusted insights and explanations.

What is training in AI?

Training teaches an AI model how to make sense of data, like a digital boot camp for machines. It’s where the magic happens, as the AI evolves from a blank slate into something that can recognize stop signs, recommend the next binge-worthy show, or even compose poetry of questionable quality.

Unlike inferencing – where AI applies its smarts to solve new problems – training is all about learning. You can think of it as the study mode where the system dives into massive datasets, figures out patterns, and hones its abilities.

For instance, to teach a model to spot stop signs, developers feed it millions of labeled images of stop signs taken in different conditions – sunshine, rain, weird angles, you name it. After enough examples, the AI becomes a stop-sign spotting pro, ready to hit the road.

However, AI training isn’t a one-size-fits-all deal. It typically starts with pre-training, where the model builds its general knowledge, like learning the alphabet. Then there’s fine-tuning, the next level where it specializes in a specific task, like writing code or helping you win trivia night. But training isn’t just about dumping data into a machine and hoping for the best. It requires three core ingredients:

  1. A solid AI model architecture – It’s the brainpower behind the scenes. Whether a basic algorithm or a deep neural network, this design determines how well the AI can learn patterns and handle real-world tasks.
  2. High-quality, labeled data – AI needs accurate, well-labeled data to learn effectively. If you’re teaching it to recognize cats, it needs thousands of correctly labeled “cat” images. Better data means smarter AI.
  3. Heavy-duty computing power – Training AI requires powerful hardware like GPUs or TPUs to process massive amounts of data and do it quickly. It’s heavy-duty computing, often handled by specialized data centers or cloud computing systems.

Better training builds smarter AI, allowing it to perform with precision during inferencing. Without it, AI wouldn’t do much more than sit there, collecting dust.

Next, let’s explore the different methods AI uses to get trained and how this training shapes the systems we use every day.

What are the types of training in AI?

Like selecting the right tool for a specific task, training an AI model involves choosing the right method to match the goal. Let’s take a look at the most popular AI training techniques and the ways they influence the systems shaping our modern world.

Supervised learning

Think of supervised learning as a teacher guiding the AI step by step. It uses labeled data, where each input comes with a correct answer. The AI learns by example, making it great for tasks such as detecting spam emails. It’s calculated, systematic, and ideal for moments when accuracy takes center stage.
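
To make the idea concrete, here is a minimal supervised-learning sketch in Python. It assumes scikit-learn is installed, and the tiny email dataset is invented purely for illustration; the article doesn't prescribe any particular library or data.

```python
# Supervised learning in miniature: labeled examples in, a spam classifier out.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Win a free prize now", "Lowest price on meds", "Meeting moved to 3pm",
    "Lunch tomorrow?", "Claim your reward today", "Quarterly report attached",
]
labels = ["spam", "spam", "ham", "ham", "spam", "ham"]  # the "correct answers"

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)                        # learning by example
print(model.predict(["Claim your free prize"]))  # -> ['spam']
```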

Unsupervised learning

This training type lets AI roam freely, discovering hidden gems in raw data without a guide. With no labels to follow, it uncovers patterns, groups similar behaviors, and spots trends you didn’t even know existed. Perfect for clustering customer habits or revealing insights buried in big data, it’s all about letting AI play detective.
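
As a rough illustration, the sketch below clusters made-up "customer habit" numbers with scikit-learn's k-means. No labels are supplied; the algorithm finds the groups on its own. The data and cluster count are assumptions for the example.

```python
# Unsupervised learning: raw, unlabeled data grouped into clusters.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [store visits per month, average spend per visit] -- invented numbers.
customers = np.array([
    [2, 15.0], [3, 18.5], [2, 12.0],        # occasional, low-spend shoppers
    [20, 250.0], [22, 300.0], [19, 230.0],  # frequent, high-spend shoppers
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)  # two behaviour groups found without any labels, e.g. [1 1 1 0 0 0]
```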

Semi-supervised learning

This method is the best of both worlds, combining the structure of supervised learning with the freedom of unsupervised learning. It starts with a handful of labeled data for guidance and then dives into the vast sea of unlabeled data to refine its skills. It’s a smart way to handle complex tasks, like text classification, where a little guidance paves the way for remarkable results.
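
A minimal sketch of the idea, assuming scikit-learn: only two points carry labels, and label propagation spreads them to the unlabeled rest. The one-dimensional data is invented for the example.

```python
# Semi-supervised learning: a few labels guide the labeling of everything else.
import numpy as np
from sklearn.semi_supervised import LabelPropagation

X = np.array([[1.0], [1.2], [0.9], [5.0], [5.2], [4.8]])
y = np.array([0, -1, -1, 1, -1, -1])  # -1 marks unlabeled points (scikit-learn convention)

model = LabelPropagation().fit(X, y)
print(model.transduction_)  # labels inferred for every point, e.g. [0 0 0 1 1 1]
```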

Reinforcement learning

It’s all about trial and error. The AI learns by interacting with an environment, receiving rewards for good decisions and penalties for bad ones. Over time, it figures out the best strategies to maximize success. It’s what makes those “you might like” suggestions so spot-on.
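
Here is a toy trial-and-error sketch in plain Python: a two-armed bandit rather than a full recommender system, with invented payout odds, just to show the reward-driven update loop.

```python
# Reinforcement learning in miniature: try actions, collect rewards, favor what works.
import random

random.seed(0)
reward_prob = [0.3, 0.7]  # hidden payout odds for arms 0 and 1 (the "environment")
value = [0.0, 0.0]        # the agent's running estimate of each arm
counts = [0, 0]

for step in range(1000):
    # Mostly exploit the best-looking arm, but explore 10% of the time.
    arm = random.randrange(2) if random.random() < 0.1 else value.index(max(value))
    reward = 1.0 if random.random() < reward_prob[arm] else 0.0  # feedback from the environment
    counts[arm] += 1
    value[arm] += (reward - value[arm]) / counts[arm]            # learn from the outcome

print([round(v, 2) for v in value])  # estimates settle near the true odds; arm 1 wins
```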

Deep neural networks

Deep neural networks mimic the human brain with layers of interconnected nodes. They excel at handling complex relationships and making sense of diverse data. Whether it’s voice-activated assistants like Siri or image recognition systems, deep neural networks are the powerhouse behind many modern AI applications.
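
For a sense of what "layers of interconnected nodes" looks like in code, here is a small PyTorch sketch. It assumes torch is installed; the layer sizes are arbitrary, and real speech or vision models are far larger.

```python
# A small deep neural network: stacked layers, each feeding the next.
import torch
import torch.nn as nn

network = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),  # input layer -> first hidden layer
    nn.Linear(128, 64), nn.ReLU(),   # second hidden layer
    nn.Linear(64, 10),               # output layer: 10 class scores
)
print(network(torch.randn(1, 784)).shape)  # torch.Size([1, 10])
```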

Linear regression

This one shines in its simplicity, using relationships between variables to forecast results. It’s a reliable tool for straightforward tasks, like forecasting sales with a predictive line.
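
A minimal sketch of that predictive line, assuming scikit-learn and using invented monthly sales figures:

```python
# Linear regression: fit a straight line to past sales and extend it forward.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.array([[1], [2], [3], [4], [5], [6]])
sales = np.array([100, 110, 125, 130, 145, 150])  # units sold per month

model = LinearRegression().fit(months, sales)
print(model.predict([[7]]))  # forecast for month 7, roughly 163 units
```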

Logistic regression

This takes prediction one step further by focusing on binary outcomes – yes or no, true or false. It’s commonly used in finance and healthcare, making decisions like loan approvals or spotting fraud.
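
The sketch below shows the binary flavour of that idea with scikit-learn. The income column and yes/no approval labels are made up, not drawn from any real lender.

```python
# Logistic regression: predict a yes/no outcome and the probability behind it.
import numpy as np
from sklearn.linear_model import LogisticRegression

income = np.array([[20], [25], [30], [55], [60], [80]])  # annual income, in $1,000s
approved = np.array([0, 0, 0, 1, 1, 1])                  # 0 = denied, 1 = approved

model = LogisticRegression().fit(income, approved)
print(model.predict([[45]]))        # the hard yes/no decision
print(model.predict_proba([[45]]))  # the probabilities behind it
```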

Decision trees

Decision trees are like a flowchart for making decisions, with each branch guiding you to a specific result. They’re great for straightforward tasks, like assessing loan applications.
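
A tiny scikit-learn sketch makes the "flowchart" visible. The loan features and labels here are invented, and export_text prints the learned branches.

```python
# A decision tree: a learned flowchart for approve/reject decisions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [income in $1,000s, existing debt in $1,000s] -- made-up applicants.
X = np.array([[20, 15], [30, 20], [60, 5], [75, 10], [25, 18], [80, 2]])
y = np.array([0, 0, 1, 1, 0, 1])  # 0 = reject, 1 = approve

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["income", "debt"]))  # the branches, as text
```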

Random forest

A random forest is like having a team of decision trees work together. By pooling their insights, it avoids overfitting and provides more accurate predictions – good for tasks like predicting customer behavior from multiple data sources.
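
Swapping the single tree for a whole forest is a one-line change in scikit-learn; the sketch below uses invented browsing data to show the combined vote.

```python
# A random forest: many decision trees trained on the same problem, voting together.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [site visits, minutes on site]; 1 = customer eventually made a purchase.
X = np.array([[1, 2.0], [2, 3.5], [10, 30.0], [12, 25.0], [3, 4.0], [11, 28.0]])
y = np.array([0, 0, 1, 1, 0, 1])

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(forest.predict([[9, 22.0]]))  # the trees' pooled prediction
```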

Transfer learning

Why start from scratch when you can borrow some expertise? Transfer learning takes a pre-trained model and adapts it to a new task. For instance, an image recognition model initially trained on general objects can be fine-tuned to identify specific items like medical anomalies.
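
In PyTorch terms, that "borrowed expertise" often looks like the sketch below: a model pre-trained on general images keeps its frozen backbone and only gets a new final layer for the specialized task. It assumes a recent torch and torchvision are installed, and the two-class target is an arbitrary stand-in.

```python
# Transfer learning: reuse a pre-trained backbone, retrain only a new head.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pre-trained on general images

for param in model.parameters():  # freeze the general-purpose feature extractor
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 2)  # new head, e.g. "anomaly" vs "normal"
# Training now updates only model.fc, using the smaller task-specific dataset.
```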

Generative models

Last but certainly not least, we have generative models. These do more than just analyze data – they create it. These models can generate new content, like images or text, based on the patterns they’ve learned. For instance, chatbots like ChatGPT, Gemini, and Jasper are superb examples of generative AI in action.

With the right training, AI models can accomplish incredible feats, from diagnosing diseases to predicting customer preferences – and much more.

The AI training process

Training an AI model is a multi-step journey that shapes raw data into a decision-making wonder. Let’s delve into each part of this exciting process:

1. Data collection and preparation

Data is the lifeblood of AI, and collecting it is the first step in building a smart model. In finance, this could mean collecting data like credit histories, economic trends, and court records. These data points help train the model to understand individual risk markers, making it savvy enough to offer loan approvals or predict financial trends.

To simplify this complex process, a data fabric acts as a unifying tool, seamlessly integrating data from multiple sources into a cohesive, accessible system.

2. Data pre-processing

Once the data is collected, it’s time to get it ready for action. This stage involves cleaning and formatting the data to ensure it’s accurate, complete, and compatible with the AI model. Plus, here’s where we tackle bias head-on, ensuring the model doesn’t pick up any bad habits. By removing skewed data, you pave the way for a learning process that’s both fair and ethically sound.
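
A small pandas sketch of that clean-up step, using an invented applicants table; the article doesn't specify any particular tooling.

```python
# Pre-processing: drop incomplete rows, fix types, and encode labels before training.
import pandas as pd

raw = pd.DataFrame({
    "income": ["52000", "48000", None, "61000"],  # strings and a missing value
    "age": [34, 29, 41, 38],
    "approved": ["yes", "no", "yes", "yes"],
})

clean = raw.dropna().copy()                                   # remove incomplete records
clean["income"] = clean["income"].astype(float)               # consistent numeric types
clean["approved"] = (clean["approved"] == "yes").astype(int)  # labels as 0/1
print(clean)
```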

3. Model selection

Different challenges require different approaches. Reinforcement learning models shine in scenarios like business forecasting, where trial and error help the model improve over time. Meanwhile, deep learning models excel at identifying patterns in images, documents, or text, thanks to their powerful neural networks. Your choice depends on factors like task complexity, resource availability, and the level of precision you need. Choosing wisely sets the stage for AI’s future success.

4. Training algorithms and techniques

With the model selected, the fun begins – training time. During training, the model goes through multiple iterations, making predictions and refining them based on feedback.

It’s like assembling a puzzle – early pieces might not fit perfectly, but persistence reveals the complete picture. Each step refines the model’s performance toward perfection.
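
Stripped of every framework, that predict-compare-adjust cycle looks roughly like this plain-Python sketch, with one adjustable weight and invented data:

```python
# The training loop: predict, measure the error, nudge the weight, repeat.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.1)]  # (input, desired output) pairs
w = 0.0              # the model's single adjustable weight
learning_rate = 0.01

for step in range(500):                 # many iterations
    for x, target in data:
        prediction = w * x              # make a prediction
        error = prediction - target     # compare it with the desired output
        w -= learning_rate * error * x  # adjust to reduce the error

print(round(w, 2))  # settles near 2.0, the slope hidden in the data
```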

5. Evaluation

After training, it’s test day – time to see if our AI “student” has learned its lessons well. If it passes with flying colors, the model is ready to tackle real-world tasks. If not, don’t sweat it – just like retaking a tricky test, you may need to revisit some parts of the training process.

Keeping the model sharp requires constant check-ups, especially when it encounters curveballs or ventures into uncharted data territories.
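
In practice, "test day" usually means holding back part of the labeled data and grading the model on it. A scikit-learn sketch, using the library's bundled iris dataset so it runs as-is:

```python
# Evaluation: train on one slice of the data, score the model on a held-out slice.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # the study phase
print(accuracy_score(y_test, model.predict(X_test)))             # the report card
```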

What is inferencing in AI?

Inferencing is where the true impact of AI comes alive. Once training is complete, the model takes what it has learned and applies it to brand-new data, producing predictions or decisions in real time. Take, for instance, an AI system designed to spot counterfeit currency – having studied thousands of annotated images of real and fake bills during training, it uses inferencing to judge whether a bill it has never seen before is genuine.

Inferencing typically follows fine-tuning, the step in which a pre-trained model is further specialized for specific tasks, like recognizing anomalies in financial transactions or understanding medical images.

Inferencing vs training: What’s the difference?

AI training and inferencing are two sides of the same coin, each serving a specific purpose in the AI lifecycle.

Training is where an AI model begins its journey, learning from a mix of input examples and desired outputs through trial and error. This foundational phase helps the model grasp the essentials of its task – whether it’s recognizing patterns, making decisions, or forecasting outcomes.

Once training is complete, the model enters the inferencing phase. Here, it takes its learned knowledge and applies it in real time to make predictions or decisions based on new data. The better the model’s training and fine-tuning, the more accurate its inferences will be – though no system is completely foolproof.

In short, training builds the foundation, while inferencing brings that knowledge to life in practical, real-world applications.

The inferencing process

Inferencing is when AI puts its training into practice, tackling real-world challenges like predictions and data analysis. But first, it needs thorough preparation to ensure success.

Preparing and deploying models

Every successful AI journey begins with solid preparation. Before inferencing can begin, datasets need to be cleaned and organized to ensure the model isn’t tripped up by duplicate entries or messy formatting.

Once training is complete and the model has been thoroughly tested for accuracy, biases, and security issues, it’s time for deployment. This involves integrating the model into its real-world environment, setting up infrastructure, and training your team to make the most of it. In short, this is the AI equivalent of boot camp – it gets your model battle-ready.

Inferencing techniques

AI inferencing isn’t a one-size-fits-all process – it’s more of a mix-and-match game. Techniques like pruning remove unnecessary parts of a neural network, trimming the fat to make it faster and more efficient. Layer fusion is another popular approach, combining multiple steps into one to streamline operations.

It’s a bit like compressing a high-res image into a cute little JPEG – still sharp and functional, just lighter and quicker to process. Thanks to these techniques, applications like spam filters, image recognition, and virtual assistants can work smoothly, even on everyday devices.
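
As a rough illustration of pruning, the PyTorch sketch below zeroes out the smallest 30% of one layer's weights; the layer size and pruning amount are arbitrary choices for the example.

```python
# Pruning: remove the least important weights so inference is lighter and faster.
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(256, 128)                              # one layer of a larger network
prune.l1_unstructured(layer, name="weight", amount=0.3)  # zero the smallest 30% by magnitude

sparsity = (layer.weight == 0).float().mean().item()
print(f"{sparsity:.0%} of this layer's weights are now zero")
```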

Optimizing inferencing performance

Here’s where your AI gets turbocharged. GPUs, with their ability to handle billions of calculations at lightning speed, are the unsung heroes of inference. They let models deliver fast, accurate predictions without breaking a sweat. But performance optimization doesn’t stop there – ongoing monitoring and adjustments help your model stay sharp as it tackles new challenges. After all, even top-notch AI needs a little care to keep it from glitching out under pressure.

Now, armed with an understanding of training and inferencing, let’s discover how they drive everyday AI applications.

What are the real-world applications of training and inferencing?

AI is no longer a futuristic dream – it’s here, living in our apps, gadgets, workplaces, and beyond. Training and inferencing aren’t just buzzwords – they’re the backbone of AI’s impact on the world around us. But what do these processes look like in action?

Let’s dive into real-world examples to see how training builds intelligence, and inferencing brings it to life.

Examples of training

Training is where the AI magic begins – feeding AI models with oceans of data to uncover patterns, relationships, and structures. Depending on the complexity of the task, this process can stretch over weeks or even months. Here’s a look at how training unfolds in the real world:

  • Healthcare: AI models, for instance, are trained on thousands of CT scans to spot early signs of lung cancer. The training phase ensures these models learn to distinguish between healthy and abnormal scans with precision, potentially saving lives.
  • Manufacturing: Volvo harnesses the power of training, using historical performance data from vehicles to predict component failures or when maintenance is needed. It’s like giving AI a masterclass in engineering to keep your car running smoothly.
  • Creative arts: IBM’s Chef Watson was fed thousands of recipes and flavor profiles, allowing it to come up with mind-blowing dish ideas. Similarly, AI models trained on datasets of music can inspire new songs by understanding themes and patterns in music, acting as a muse for artists.
  • Social media: Platforms like Facebook and Instagram train their AI systems on billions of user interactions to personalize recommendations and detect inappropriate content. It’s a bit like having a super-smart assistant that knows what you want to see (and what not to see) in your feed.

The training phase lays the groundwork for inferencing, giving models the knowledge to shine in their intended roles. Now, let’s explore real-world examples of inferencing.

Examples of inferencing

Inferencing is where AI comes to life in the real world – putting all the know-how it’s gathered during training into action to make decisions, deliver insights, or enhance systems. Let’s dive deeper:

  • Consumer goods: Smart speakers like Amazon Echo and Google Home use inferencing to understand and respond to your voice commands in no time. By analyzing speech patterns and context, these devices deliver personalized answers and help with tasks like setting reminders, playing music, or checking the weather.
  • Financial services: American Express relies on inferencing to catch fraudulent transactions almost instantaneously. By spotting patterns and anomalies in real time, AI models help prevent losses and protect customers from fraudsters.
  • Energy: GE Power uses inferencing to monitor power plants, analyze sensor data to predict when maintenance is needed, and optimize operations for maximum efficiency and reliability.
  • Media: Netflix harnesses inferencing to suggest shows and movies based on your viewing history. By analyzing your habits, it crafts highly personalized recommendations, ensuring you always have something new to watch.
  • Retail: Walmart’s AI tools, like the “Scan and Go” app, use inferencing to enhance shopping experiences by analyzing customer behavior and preferences to offer real-time solutions. It’s like having a personal shopping assistant right in your pocket, making your in-store experience more enjoyable.

Inferencing is the crucial stage where AI transforms from a trained model into a dynamic tool that can solve real-world challenges. In the next chapter, we’ll explore some of the most popular tools and frameworks used to develop, train, and deploy these AI models.

Popular AI tools and frameworks

The AI landscape is packed with cutting-edge tools and frameworks, perfect for everything from academic exploration to more practical industrial applications. Here’s a quick look at some of the most popular options:

OpenNN

Open Neural Networks (OpenNN) is a powerful C++ library that brings neural networks to life. Its high performance and efficiency make it a top choice for research applications and AI systems that need to make decisions based on complex data. Thanks to its C++ roots, OpenNN excels in handling large datasets quickly and efficiently. This makes it perfect for projects that require fast processing speeds.

It supports various neural network types, like multilayer perceptrons, radial basis function networks, and probabilistic neural networks. With its modular architecture, researchers and developers can easily tweak and expand its functionality to fit their specific needs.

While it might have a steeper learning curve compared to some Python-based libraries, its power and flexibility make it a rock-solid tool for advanced AI development.

OpenAI

OpenAI has established itself as a leader in AI innovation with its diverse range of tools and models. The GPT series, in particular, stands out, pushing the boundaries of natural language processing and generation. But OpenAI’s platform is more than just text – it’s a hub for tools that enable everything from image generation to text-to-speech.

The real beauty of OpenAI’s tools is how easy they are to use. Whether you’re a hobbyist tinkering at home or part of a large enterprise, you can integrate powerful AI capabilities into your projects with little friction. Yes, there’s a robust free tier, but unlocking more advanced features and larger models requires a premium subscription.

PyBrain

If you’re looking for a versatile, lightweight machine-learning library, PyBrain is the way to go. It’s ideal for researchers, educators, and developers who want a simple, flexible environment for diving into machine learning.

What sets PyBrain apart is its modular design, making it easy to construct and adjust neural network architectures. It supports a variety of learning methods, from supervised to unsupervised, offering great flexibility for different projects. Although it may lack the community support found in more mainstream libraries, its simplicity and user-friendly approach make it a solid tool for newcomers and those looking to prototype quickly.

IBM Watson

IBM Watson brings a powerful suite of AI and machine learning services to the table, making it a go-to for almost any AI-powered project. With features like natural language processing, computer vision, and predictive analytics, all wrapped up in IBM’s Cloud, Watson is a reliable and high-performing choice for businesses in the healthcare, finance, and retail sectors.

Watson’s pre-built APIs and services make it incredibly easy for businesses to tap into AI without needing a lot of in-house expertise. This seamless integration, coupled with IBM’s extensive experience in enterprise technology, turns Watson into a powerhouse for everyone from small startups to large enterprises. However, the pricing may be something to consider for smaller projects.

CNTK

Microsoft Cognitive Toolkit (CNTK) is a robust, open-source deep learning framework developed by Microsoft. Its standout features include impressive efficiency and scalability, making it a superb choice for research and production alike. It shines when handling large-scale models, which is a big advantage for data scientists and researchers working on projects that demand computational efficiency.

This toolkit supports a wide variety of neural network architectures, from feedforward and convolutional to recurrent networks, offering plenty of flexibility for various deep learning tasks. With its Python API, CNTK is easily accessible to developers who are comfortable with Python, allowing them to tap into its capabilities effortlessly. Although CNTK is more challenging to learn than some alternatives, its performance and advanced features make it a great choice.

Serious challenges and future directions

Training AI models comes with its fair share of challenges, and data bias is a big one. Diversity in training data is essential to prevent biased predictions and unfair outcomes.

Computing power and infrastructure are also significant challenges. As models become more complex, they need robust infrastructure and plenty of computational resources. The model you choose should match the resources you have on hand to prevent serious setbacks.

Overfitting is another common headache. When models get too tuned into their training data, they struggle to generalize to new situations. Tackling this involves using techniques like regularization, cross-validation, and early stopping.
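
Cross-validation is the easiest of those checks to show in code: score the model on several held-out folds and compare. A scikit-learn sketch on the library's bundled iris dataset:

```python
# Cross-validation: several independent train/test splits expose an overfit model.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5)
print(scores)         # five estimates of real-world performance
print(scores.mean())  # a big gap between training accuracy and this mean signals overfitting
```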

Explainability is a growing pain for many AI systems. Many models still operate like black boxes, making it tough for users to understand their decision-making processes. While tools to enhance explainability are improving, they’re not yet universally accessible.

For AI inference, latency can be a real buzzkill, especially for real-time applications. Reducing latency means optimizing your models and hardware to achieve quicker response times without sacrificing accuracy.

Scalability is another challenge. AI systems need to handle increasing volumes of data and requests without falling behind. Cloud computing and distributed microservices are crucial for maintaining performance as applications grow.

Balancing accuracy and speed is a delicate dance. High-accuracy models are often slower, which can be a problem for applications that require fast responses. Techniques like model pruning and quantization can help strike the right balance between speed and accuracy.
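
Quantization is often a one-call change. The PyTorch sketch below stores a toy model's linear-layer weights as 8-bit integers; it assumes a recent PyTorch build with the built-in quantization utilities, and the layer sizes are arbitrary.

```python
# Quantization: trade a little precision for a smaller, faster model at inference time.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized)  # the Linear layers are replaced by dynamically quantized versions
```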

Tackling these challenges involves a mix of technical know-how and practical strategies to keep up with AI’s rapid advancements.

The ongoing evolution of AI

As AI continues to evolve, so do the tools that empower us to harness its power for just about anything. From enhancing privacy to smarter decision-making, AI is transforming the way we live and work. Training and inference lie at the heart of this progress, each demanding innovative solutions for issues like data diversity and performance optimization.

With every step forward, our ability to train and deploy AI improves. The future seems bright – we just need to make sure the AI doesn’t overfit on its own optimism. So, whether you’re fine-tuning its inference skills or training it to conquer new challenges, there’s no better time to plug into the AI revolution.

We’ve listed the best AI tools.

2024-12-16 07:53:52


Astro Bot is 2024’s Game of the Year. Now What?

It ain’t December without the Game Awards, Geoff Keighley’s annual trailer showcase and occasional awards ceremony celebrating the year’s biggest games. More than in previous years, this year’s nominees for the top Game of the Year prize were a diverse mix of heavy hitters like Final Fantasy VII Rebirth and Metaphor: ReFantazio and surprise darlings Balatro and Black Myth Wukong. And the winner turned out to be… Astro Bot, the PlayStation 5-exclusive platformer from Team Asobi.

We’ve had the Game Awards for a decade now, and it’s got a bit of a type when it comes to its GOTY choices: previous winners like The Witcher 3, God of War’s 2018 reboot, and Baldur’s Gate 3 were fantastical or mature titles, while Astro Bot is a family-friendly game about a cute robot with a big head exploring worlds and making friends with Funko-sized versions of PlayStation characters. It’s a very good game, but its win was definitely surprising, especially when Balatro had taken over the world for months and Metaphor was also taking off in its own way. Now that the dust has settled, the question on folks’ minds is what it means for a platformer to win the top award at what’s become a big, mainstream awards ceremony for the medium.

©Team Asobi/PlayStation

The immediate next step is almost surely going to be a sequel of some kind; Team Asobi is currently putting out free post-launch levels for Astro Bot, which had sold 1.5 million copies as of this past November. No doubt Sony’s going to let Asobi cook, and whenever the next game is revealed, I hope the Astro series takes its first step toward forging its own identity. Both Bot and its predecessor Astro’s Playroom (which came pre-installed on launch PS5s) have been sightseeing tours of PlayStation’s past. It’s very fun to see tiny, big-headed versions of childhood mascots, but the constant winking can wear thin fairly fast, and it highlights how the series has little going for it beyond references and its excellent gameplay mechanics. An argument can be made that it doesn’t need anything more, but having a unique personality is what’s helped other platformers endure. Of the big three publishers, Nintendo is really the only one to fully realize that, and it’s why Mario can maintain such consistency, even as the character (and Luigi) jump between platformers and a dozen other genres at any given moment. So if Sony wants a Mario of its own, that’ll mean figuring out who Astro is beyond a blank slate wearing the skins of PlayStation’s older series.

Speaking of franchises, one of the other big surprises at the Game Awards came from Capcom, which revealed it was developing new games in the Okami and Onimusha franchises. The Resident Evil studio has had a really good streak of releases lately, and it recently stated the two projects are part of a larger effort to revitalize series it previously shelved. You have to imagine Sony’s looking at that alongside Astro Bot, too – a lot of PlayStation franchises get some love in that game, and fans have been hoping for years that some of them would get dusted off. Many have also hoped the collapse of Concord alongside Astro’s success has provided a wake-up call for Sony to focus on making smaller, more diverse games instead of putting all of its eggs in the cinematic, triple-A basket. It’s been a problem within the industry for years, particularly for PlayStation’s first-party teams, and not recognizing it sooner is how Sony wound up laying off over 1,000 developers this year and canceling several projects.

It’s a shame PlayStation’s devotion to blockbusters led to the erosion of the double-A space, because that’s where many of its older franchises would likely work best nowadays. (See also: Ubisoft’s pretty good Prince of Persia: The Lost Crown from earlier this year.) Lately, I’ve been replaying Sly 2: Band of Thieves after it was ported to the PS5, and doing a first-time Resistance 2 playthrough via cloud streaming. Both feel just right for their respective eras, and it’s easy to imagine their series having a place in the current PlayStation pantheon if they were allowed to simply exist without massive expectations thrust upon them. Sly could easily fill the niche for stylish, personality-filled games in the wake of last year’s Hi-Fi Rush, and Resistance or Killzone could be a good system-exclusive shooter – last year, Sony tried to stop Microsoft from buying Activision Blizzard by arguing that Call of Duty was too valuable and that no other shooter could hope to top it. Ideally, neither would have to aim for such lofty heights, but serving as PlayStation’s answer to Halo, which is itself about to undergo a second or third revamp, would be appreciated. As it stands, any hopes for mid-size, non-Nintendo games with brand recognition now fall squarely on Astro’s tiny, delicate shoulders.

© Team Asobi/PlayStation

Team Asobi isn’t expected to use Astro Bot to fix everything wrong with triple-A games overnight; it just has the unfortunate luck of arriving as the industry reckons with several years’ worth of calculated risks that didn’t pay off as expected. Making a healthier industry will take time, and Astro Bot’s impact will be felt sooner or later, even if it’s just in its own sequel or an indie game that hopes to capture some of its unbridled, non-corporate spirit.

Or failing that, PlayStation could stand to dial back on the remasters and remakes and just please put more of its older first-party titles on PC or natively on the PlayStation 5.

Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.

2024-12-15 18:40:49


The best stocking stuffer gifts under $50

It never hurts to have an extra bag with you whenever you go out. Whether you’re going to pick up groceries and know you’ll need one, or you’re heading out with friends and unexpectedly pick up a few things, a lightweight tote bag is crucial. There are thousands of options out there, but we included Peak Design’s Packable tote in our gift guide because it doesn’t have the typical reusable bag design and it remains fairly affordable at only $20.

The bag is made of 100-percent recycled ripstop nylon, which is resilient as well as water resistant, and it has a zip closure, which is something most other reusable bags don’t have. That keeps your items more secure, and the bag is easier to carry in different ways thanks to its single shoulder/hand strap, which sports microfiber padding for extra comfort. We also like that it has an interior pocket that can hold a phone, wallet or keys, and it takes up surprisingly little space when it’s packed into itself. — V.P.

2024-12-13 17:05:14


Today’s NYT Mini Crossword Answers for Dec. 16

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands and Connections puzzles.


The New York Times Mini Crossword pulled out one of my favorite groaner jokes today. I’m pretty sure I’ve seen a variation of this clue printed on a T-shirt. It’s something like, “Use commas, or else ‘I like cooking, family and friends’ turns into ‘I like cooking family and friends.’” OK, OK, it’s kind of dumb, but to an English major, it’s a little bit funny. Read on for more NYT Mini Crossword hints and answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

The Mini Crossword is just one of many games in the Times’ games collection. If you’re looking for today’s Wordle, Connections and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get at those Mini Crossword clues and answers.

The completed NYT Mini Crossword puzzle for Dec. 16, 2024.

NYT/Screenshot by CNET

Mini across clues and answers

1A clue: Petting zoo animal
Answer: GOAT

5A clue: Important feature of the sentence “I like cooking, family and friends”
Answer: COMMA

6A clue: Labor alliance
Answer: UNION

7A clue: “All ___ are off!”
Answer: BETS

8A clue: Triage centers, for short
Answer: ERS

Mini down clues and answers

1D clue: One who’s doomed to fail
Answer: GONER

2D clue: Leaves out
Answer: OMITS

3D clue: “Famous” cookie maker
Answer: AMOS

4D clue: Original color of peanut M&Ms, introduced in 1954
Answer: TAN

5D clue: Rubik’s ___
Answer: CUBE

How to play more Mini Crosswords

The New York Times Games section offers a large number of online games, but only some of them are free for all to play. You can play the current day’s Mini Crossword for free, but you’ll need a subscription to the Times Games section to play older puzzles from the archives.



2024-12-16 04:00:07


This new 3D printing technology could make housing construction faster and more efficient

  • Streamlined 3D printing process reduces downtime for multiple buildings
  • Eco-friendly construction using 99% locally sourced materials
  • Modular design allows customization for diverse project needs

As global demand for housing and infrastructure grows, traditional construction methods often struggle to keep pace. However, the rise of 3D printing technology is set to transform the sector by enabling faster, more cost-effective, and eco-friendly building processes.

COBOD International, which has over 80 3D construction printing operations worldwide, says it has taken a significant leap forward with the launch of its BOD3 3D Construction Printer.

The company says the BOD3 is its most advanced 3D construction printer to date: it is designed to print with real concrete and introduces features that promise to enhance efficiency, reduce costs, and streamline the construction of low-rise buildings across various settings.

A new benchmark in 3D construction printing

The BOD3 has already been deployed globally, with operational units in countries such as Indonesia, Angola, and Bahrain. These early implementations have reduced downtime between projects and sped up construction times, showing the printer can handle high-volume construction projects.

The BOD3 comes with an advanced, extendable ground-based track system that allows the printer to operate continuously along the Y-axis. It can print multiple buildings one after another without reinstallation, which reduces setup time and makes the printer highly efficient for large-scale construction sites where multiple structures need to be erected.

This new model comes with a modular design, allowing it to be customized to the specific needs of different construction projects. The BOD3 comes equipped with an operational stand that allows operators to control and monitor the 3D printer and its supplementary equipment through a single, integrated system. It also has a Universal X-Carriage for the integration of additional tools such as those for insulation, painting, and sanding.

This printer also comes with an Advanced House Management System (AHMS) which minimizes the need for manual labour by ensuring a smooth material flow via secured hoses, enhancing the overall efficiency of the construction process.

According to the company, the BOD3 can print with 99% locally sourced materials, reducing the need to transport expensive and environmentally costly resources. In partnership with Cemex, COBOD has also developed the D.fab solution, which allows traditional concrete to be adapted for 3D printing. This reduces the amount of binder required, making the construction process faster and more eco-friendly.

“The global housing crisis demands a more efficient construction solution that is faster, more efficient, and scalable. The BOD3 is our answer to this challenge. Drawing on years of research and expertise, we’ve designed the BOD3 with innovative features, making it our most cost-effective and efficient model yet for multiple low-rise buildings,” said Henrik Lund-Nielsen, Founder and General Manager of COBOD.

“Its design supports high-volume, linear production of houses, enabling mass production without compromising quality. The fact that six units have already been sold before its official launch speaks volumes about the BOD3’s market demand and the trust our customers place in our technology.”


2024-12-16 03:45:00


Ho, Ho, Ho, Return of the Living Dead Rises Again!

We’ve had quite a few zombie franchises in the last decade, but one that hasn’t been seen in a while is the Return of the Living Dead series. The last movie dropped all the way back in 2005, but it seemed things were set to change when a new movie was announced in 2023. Fortunately for fans, the series is officially returning next year, just in time for the first film’s 40th anniversary.

After a handful of teaser images on Friday, the studio released a teaser for the next film, also titled Return of the Living Dead. Instead of being a reboot as initially believed, it’s actually a continuation of the original movie: set almost two years later in the winter of 1985, with the Tarman zombie back in action and looking to cause some chaos in the unnamed Pennsylvania town where the new film is set. Living Dead said not to expect any returning characters from the first movie, given their “terrible fates,” but teased that audiences will be “welcomed back into the dark humored, sexy, edgy, Trioxin fueled world horror fans first encountered.”

The first Return of the Living Dead focused on a town infected by zombies that suddenly rose up after a toxic gas accidentally reanimated all of its corpses. In a press release, director/writer Steve Wolsh built up the teaser by revealing it was made entirely with practical effects and on location, and that the cemetery Tarman walks through was built to replicate the first film’s Resurrection Cemetery. “Utilizing our widescreen anamorphic lenses, we captured the practical snow effects swirling against the night sky that gives it an amazing look and texture,” he said. “It’s going to blow people away seeing an entire film made like this.”

Return of the Living Dead shambles into theaters Christmas 2025.

[via BloodyDisgusting]

Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.

2024-12-15 20:05:38


Hackers may have stolen hundreds of thousands of Rhode Islanders’ sensitive info in RIBridges cyberattack

Hackers behind a cyberattack that targeted Rhode Island’s public benefits system were able to get the sensitive data — including Social Security numbers and some banking information — of hundreds of thousands of people, and they have threatened to release it as soon as this week if they aren’t paid a ransom, Rhode Island governor Dan McKee said in a press conference on Saturday night. The Rhode Island government opened a toll-free hotline on Sunday (833-918-6603) to provide information on the breach and how residents can protect themselves, but you won’t be able to find out for sure if your data was stolen by calling in. People who may have been affected will be notified by mail.

The attack targeted the RIBridges system, maintained by Deloitte, which is used to apply for Medicaid, Supplemental Nutrition Assistance Program (SNAP), Temporary Assistance for Needy Families (TANF), Child Care Assistance Program (CCAP), HealthSource RI healthcare coverage and other public benefits available to Rhode Islanders. A press release from McKee’s office notes that “any individual who has received or applied for health coverage and/or health and human services programs or benefits could be impacted by this leak.”

It’s thought the hackers were able to get information including names, addresses, dates of birth, Social Security numbers and “certain banking information.” Deloitte first detected the breach and notified state officials on December 5, and determined on the 11th that there was “a high probability that the implicated folders contain personal identifiable data from RIBridges.” It confirmed the presence of malicious code on December 13 and subsequently shut the system down, before officials announced the attack to the public the same day.

The system is now offline while Deloitte works to secure it, which means that anyone who needs to apply for one of the affected programs will have to do so by mail, and people who are currently enrolled won’t be able to access the online portal or app. The state said it so far hasn’t detected any identity theft or fraud relating to the attack, but it will be offering free credit monitoring to anyone affected by the breach.

2024-12-15 19:46:21


Best Gifts for Grandparents This Holiday Season 2024

When it comes to shopping for grandparents, you want to get them something nice that shows you put genuine thought into your gift. But what? Our gifting experts rounded up the best gifts for grandparents this holiday season that are sure to please.

2024-12-16 02:30:00


NYT Connections today — my hints and answers for Monday, December 16 (game #554)

Good morning! Let’s play Connections, the NYT’s clever word game that challenges you to group answers in various categories. It can be tough, so read on if you need clues.

What should you do once you’ve finished? Why, play some more word games of course. I’ve also got daily Strands hints and answers and Quordle hints and answers articles if you need help for those too, while Marc’s Wordle today page covers the original viral word game.

SPOILER WARNING: Information about NYT Connections today is below, so don’t read on if you don’t want to know the answers.

NYT Connections today (game #554) – today’s words

(Image credit: New York Times)

Today’s NYT Connections words are…

  • LIGHT BULB
  • SNOWBALL
  • HAM
  • CHICKEN
  • PILLOW
  • YODEL
  • BAR
  • FOOD
  • KNOCK-KNOCK
  • WALKIE-TALKIE
  • DING DONG
  • WATER BALLOON
  • SATELLITE
  • DEVIL DOG
  • AM
  • HOHO

NYT Connections today (game #554) – hint #1 – group hints

What are some clues for today’s NYT Connections groups?

  • YELLOW: Listening 
  • GREEN: Fun scraps
  • BLUE: Sweet treats
  • PURPLE: Typical gags

Need more clues?

We’re firmly in spoiler territory now, but read on if you want to know what the four theme answers are for today’s NYT Connections puzzles…

NYT Connections today (game #554) – hint #2 – group answers

What are the answers for today’s NYT Connections groups?

  • YELLOW: TYPES OF RADIO
  • GREEN: KINDS OF PLAY FIGHTS
  • BLUE: SNACK CAKES
  • PURPLE: CLASSIC JOKE STAPLES

Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON’T WANT TO SEE THEM.

NYT Connections today (game #554) – the answers

(Image credit: New York Times)

The answers to today’s Connections, game #554, are…

  • YELLOW: TYPES OF RADIO – AM, HAM, SATELLITE, WALKIE-TALKIE
  • GREEN: KINDS OF PLAY FIGHTS – FOOD, PILLOW, SNOWBALL, WATER BALLOON
  • BLUE: SNACK CAKES – DEVIL DOG, DING DONG, HOHO, YODEL
  • PURPLE: CLASSIC JOKE STAPLES – BAR, CHICKEN, KNOCK-KNOCK, LIGHT BULB

  • My rating: Moderate
  • My score: 3 mistakes

A Connections word walks into a bar.

The barman says: “Sorry we don’t serve your type here.”

I struggled with blue today. I knew DEVIL DOGS were a type of cake, but I couldn’t think of the other two until a vague memory of a Simpsons episode came to mind – one where Homer has to choose between a winning lottery ticket and a YODEL and chooses the Yodel. Mmm… Yodels.

This LIGHT BULB moment got me over the line on my final guess.


Yesterday’s NYT Connections answers (Sunday, 15 December, game #553)

  • YELLOW: SPICES – CLOVE, MACE, NUTMEG, PEPPER
  • GREEN: PERFORM POORLY – FLAIL, FLOP, FLOUNDER, TANK
  • BLUE: SKIM THROUGH, AS PAGES – FLIP, LEAF, RIFFLE, THUMB
  • PURPLE: POP SINGERS MINUS “S” – KEY, MAR, SPEAR, STYLE

What is NYT Connections?

NYT Connections is one of several increasingly popular word games made by the New York Times. It challenges you to find groups of four items that share something in common, and each group has a different difficulty level: yellow is easy, green a little harder, blue often quite tough and purple usually very difficult.

On the plus side, you don’t technically need to solve the final one, as you’ll be able to answer that one by a process of elimination. What’s more, you can make up to four mistakes, which gives you a little bit of breathing room.

It’s a little more involved than something like Wordle, however, and there are plenty of opportunities for the game to trip you up with tricks. For instance, watch out for homophones and other wordplay that could disguise the answers.

It’s playable for free via the NYT Games site on desktop or mobile.

2024-12-16 00:02:00
