
What is IPS Screen (Monitor)?


IPS stands for “In-Plane Switching”. It is a type of LCD display panel technology. IPS displays stand out with advantages such as wide viewing angles, accurate color reproduction and fast response times. This makes them ideal for professional users such as graphic designers, photographers, video editors and gamers.

How Does IPS Technology Work?

In an IPS panel, the liquid crystal molecules are aligned parallel to the screen plane and rotate within that plane when a voltage is applied. This is why color accuracy and image quality remain consistent when the screen is viewed from almost any angle.

In traditional TN (Twisted Nematic) displays, the liquid crystals twist perpendicular to the panel, so colors and image quality deteriorate when the screen is viewed from side angles.

The working principle of IPS displays is therefore based on controlling the liquid crystal layer with electrical signals: the signals rotate the crystals, which modulate the backlight so that each pixel produces the correct color.

IPS and LED Monitor Differences

IPS and LED are two different technologies used in monitors: IPS is a panel technology, while LED is a backlight technology, and a monitor can have both an IPS panel and an LED backlight. In everyday marketing, however, “LED monitor” usually just means a monitor with an LED backlight, most often built around a TN or VA panel, which is what the comparison below refers to.

Differences between IPS and LED monitors:

Viewing Angle:

  • IPS: Provides wide viewing angle. Colors and image quality do not deteriorate when the screen is viewed from any angle.
  • LED: Provides narrower viewing angle. Colors and image quality may deteriorate when viewing the screen from side angles.

Color Accuracy:

  • IPS: Offers a wider color gamut and reproduces colors more accurately.
  • LED: Offers a narrower color gamut and colors may not be as accurate as IPS.

Response Time:

  • IPS: Has a faster response time and performs better with fast-moving images, such as in gaming and video editing.
  • LED: May have a slower response time, which can cause problems such as ghosting and blurring in gaming and video editing.

Brightness:

  • IPS: May offer lower brightness.
  • LED: Can offer higher brightness.

Energy efficiency:

  • IPS: Less energy efficient.
  • LED: More energy efficient.

Price:

  • IPS: It is more expensive.
  • LED: It is cheaper.

Summary:

  • IPS monitors offer advantages such as wide viewing angles, accurate color reproduction and fast response times.
  • LED monitors offer advantages such as higher brightness, energy efficiency and more affordable prices.
  • Which monitor is better for you depends on your needs and intended use.

What are the Advantages of IPS Monitor? 

  • Wide Viewing Angle: IPS (In-Plane Switching) technology offers a wider viewing angle compared to other display technologies. This ensures that colors and images are less distorted when the monitor is viewed from different angles.
  • Color Accuracy and More Vibrant Colors: IPS panels have high color accuracy. This makes colors appear more accurate and vibrant, which is ideal for jobs that require color precision, such as graphic design, photo editing and video editing.
  • Better Contrast Ratio: Compared to TN panels, IPS displays can offer higher contrast ratios, meaning deeper blacks and brighter whites. This improves overall image quality and delivers clearer, more distinct images.
  • Better Image Quality Preservation: IPS screens experience less color shift than other technologies and provide a homogeneous image across different areas of the screen. This minimizes color deviation on the screen and provides a more consistent image.
  • Faster Response Time: Traditionally, IPS displays have had slower response times than TN (Twisted Nematic) panels. However, with developing technology, the response times of IPS screens have also accelerated and they are now more suitable for content containing fast movements such as games.

These advantages mean that IPS displays generally stand out with features such as color accuracy and wide viewing angles.

Usage Areas of IPS Screens

IPS displays have a wide range of uses and can be used in a variety of industries and applications. Here are some of the common uses of IPS displays:

  • Professional Graphic Design and Photo Editing: Because of their color accuracy and wide color gamut, IPS displays are ideal for graphic design, photo editing, and other professional work that requires color precision. It is important that colors are displayed accurately and consistently, and IPS displays meet this requirement.
  • Video Editing and Production: In the video editing process, displaying the correct colors and images accurately is critical. IPS displays are widely preferred in video editing and production processes due to their color accuracy and wide viewing angle.
  • Medical Imaging and Radiology: Monitors used in medical imaging devices and radiological applications are generally based on IPS technology. IPS displays enable accurate display of sensitive medical images, which is critical in diagnosis and treatment processes.
  • Engineering and CAD/CAM Applications: Engineering design, CAD (Computer-Aided Design) and CAM (Computer-Aided Manufacturing) applications require detailed and accurate imaging. IPS screens offer users the opportunity to work with high resolution and accurate colors.
  • Office and Business Environment: IPS displays are widely preferred for general office use. They provide a wider viewing angle and comfortable viewing experience, making them ideal for long-term work.
  • Home Entertainment and Games: IPS displays are also popular for home entertainment. High color accuracy and wide viewing angle improve movie watching and gaming experience.

These usage areas show that IPS displays are versatile and widely preferred in many different industries and applications.

Things to Consider When Choosing an IPS Screen

Some important factors to consider when choosing an IPS screen are:

  • Color Accuracy and Calibration: Color accuracy is one of the most important advantages of an IPS display. Choosing a display that maintains color accuracy is important for photo editing, graphic design, and other color-accurate work. Additionally, choosing a display with color calibration capability can further improve color accuracy.
  • Resolution: The resolution of the screen is important for the clarity and level of detail of the images. A high-resolution IPS screen delivers sharper and more detailed images.
  • Response Time: When choosing an IPS screen for fast-moving activities such as gaming, it is important to choose a model with a low response time. Low response time reduces blur in moving images and provides a smoother experience.
  • Viewing Angle: IPS displays have a wide viewing angle, which means there is no color change or distortion when viewing the screen from different angles. This feature is important in situations where multiple people need to look at the same screen.
  • Panel Uniformity: It is important that the IPS display provides equal brightness and color across its entire panel. Poor panel uniformity may cause color or brightness differences in some areas of the screen.
  • Connectivity Options: It’s important to choose an IPS display with the connection types you need. A display that offers a variety of connectivity options such as HDMI, DisplayPort, and USB-C allows you to easily connect your devices.
  • Size and Screen Surface: Screen size and surface type (matte or glossy) depend on personal preference. The size should be chosen depending on your intended use and desktop space. The screen surface type may require you to choose between matte screens that reduce reflection in bright light environments or glossy screens that offer more vivid images.

These factors are key considerations when choosing an IPS display and can help determine your preferences.

IPS Screen vs Other Screen Types Comparisons

| Feature           | IPS       | TN            | VA            | OLED           | QLED      |
|-------------------|-----------|---------------|---------------|----------------|-----------|
| Viewing angle     | Wide      | Narrow        | Narrow        | Wide           | Wide      |
| Color accuracy    | Accurate  | Less accurate | More accurate | Excellent      | Good      |
| Response time     | Fast      | Faster        | Slower        | Very fast      | Fast      |
| Contrast          | Good      | Lower         | Better        | Excellent      | Good      |
| Brightness        | Lower     | Higher        | Lower         | Lower          | Higher    |
| Energy efficiency | Higher    | Higher        | Lower         | Lower          | Lower     |
| Price             | Expensive | Cheap         | Expensive     | Very expensive | Expensive |
| Burn-in risk      | None      | None          | None          | Yes            | None      |

Best IPS Displays (2024)

General:

  • LG UltraGear 27GP950-B: 27 inch, 4K UHD, 160Hz, HDR 1000, Nano IPS
  • Dell Alienware AW3423DW: 34 inch, 3440×1440, 120Hz, HDR 1000, QD-OLED
  • Samsung Odyssey G70A: 27 inch, 2560×1440, 240Hz, HDR 1000, VA
  • ASUS ROG Swift PG32UQX: 32 inch, 3840×2160, 144Hz, HDR 1400, Mini-LED

Game:

  • ASUS ROG Swift PG32UQXE: 32 inch, 3840×2160, 160Hz, HDR 1600, Mini-LED
  • LG UltraGear 27GN950-B: 27 inch, 3840×2160, 144Hz, HDR 1000, Nano IPS
  • Samsung Odyssey Neo G9: 49 inch, 5120×1440, 240Hz, HDR 2000, Mini-LED

Professional:

  • Apple Pro Display XDR: 32 inch, 6016×3384, 60Hz, HDR 1600, Retina 6K
  • Dell UltraSharp U3223QE: 32 inch, 4K UHD, 60Hz, HDR 10, IPS
  • LG UltraFine 32UN880-B: 32 inch, 4K UHD, 60Hz, HDR 10, Nano IPS

Budget:

  • LG 27GN850-B: 27 inch, 2560×1440, 144Hz, HDR 10, Nano IPS
  • Samsung Odyssey G5: 27 inch, 2560×1440, 144Hz, FreeSync Premium, VA
  • AOC 24G2: 24 inch, 1920×1080, 144Hz, FreeSync Premium, IPS

Factors to consider when making your choice:

  • Screen size: How big a screen do you want?
  • Resolution: How sharp an image do you want?
  • Refresh rate: You may want a higher refresh rate for a smoother viewing experience.
  • HDR: You may want HDR for a wider color gamut and contrast ratio.
  • Panel type: IPS panels are best for color accuracy and wide viewing angles.
  • Price: How much do you want to spend?

The Basis of Performance Measurement: What is Graphics Card Testing and How is It Done?


Graphics card testing is a process that measures the graphics performance of a computer. Tests often include 3D graphics scenes and intense visual effects. The main purpose of the test is to evaluate the rendering capacity, thermal behavior and overall performance of the graphics card. Users can measure their computer’s graphics capabilities by running the testing program of their choice. Test results are typically presented with metrics such as FPS (frames per second), graphics memory usage, and temperature. By understanding the capabilities of their graphics card through these programs, users can optimize its performance.

What is Graphics Card Test?

A graphics card test is a process for evaluating the graphics performance of a computer or gaming system. These tests aim to measure how efficiently the graphics card works during computer games, graphic design applications or general computer use. A graphics card stress test also checks the hardware’s drivers and compatibility while evaluating graphics performance. The tests, which are heavy on 3D graphics, measure the rendering capacity and speed of the graphics card. They also determine overall performance by taking into account factors such as memory bandwidth, pixel fill rate and graphics memory usage. These tests help users understand the limits of their graphics cards, compare performance and get the best graphics experience.


Why Should You Perform a Graphics Card Test?

Graphics card testing is an important step for a computer user because graphics performance is a determining factor during games, graphic design applications and even general computer use. These tests evaluate the performance of the graphics card while checking its hardware and software compatibility. Thanks to graphics card performance testing, users can understand their computer’s graphics capabilities, identify potential limitations, and make decisions about upgrading or replacing when necessary.

For gamers, the performance of the graphics card in games is critical. Therefore, regular testing, using up-to-date graphics drivers and compatible hardware is important to ensure the best gaming experience. Graphics card testing is an indispensable tool for users who want to improve the overall performance of their computer parts and comply with current technology standards.

When Should Graphics Card Tests Be Done for a Healthy Computer?

To ensure the sustainable performance of a healthy computer and to detect possible problems in advance, graphics card health tests should be performed at regular intervals. Users who frequently run graphics-intensive applications or games in particular should test their graphics card regularly. In addition, when the operating system or graphics drivers of a gaming computer are updated or a fresh installation is made, a noticeable drop in performance may be felt; in such cases, testing the graphics card is recommended. These tests are an important tool for checking driver compatibility, detecting potential errors and evaluating graphics performance. Detecting graphics card faults early in this way helps keep the computer running healthily and at optimum performance.

Unleash the Graphics Power: How to Test a Graphics Card?

It is very important to perform a graphics card processor compatibility test to discover the graphics power and evaluate the performance of your computer. To do this, the first step you need to take is to choose a suitable program. Popular options include 3DMark, Heaven Benchmark, and FurMark. After downloading and installing the program of your choice, you can measure the graphics performance of your video card by starting the test.

During testing, you will encounter various graphic scenes and intense visual effects. These images allow you to evaluate the rendering capacity and overall performance of the graphics card. Test results are presented with a variety of performance metrics and scores so you can see the power of your graphics card in numbers.

Graphics card tests are useful for getting better performance and detecting potential problems in graphics-intensive applications such as computer games, graphic design or video editing. Graphics card bottleneck testing also helps you unlock your computer’s graphics power and achieve the best performance for your needs.


What should be taken into consideration when testing a graphics card?

Paying attention to some important factors when testing a graphics card is critical to making accurate assessments and optimizing your computer’s graphics performance. Important factors to consider when testing a graphics card are:

  • Reliable Program Selection: It is important to choose known and reliable graphics card testing programs. Popular tools such as 3DMark, Heaven Benchmark, FurMark usually provide reliable results.
  • Thermal Check: It is important to monitor the computer’s temperature during a graphics card temperature test, as tests requiring high performance may cause the graphics card to overheat (see the sketch below).
  • Stability Check: If any crashes, graphics errors or performance drops are observed during testing, this may indicate a problem that needs to be resolved.
  • Evaluation of Test Results: If you are considering optimizing your graphics card based on the test results, pay attention to the recommendations and settings offered by the program used.

Graphics card tests performed by paying attention to these factors help you accurately evaluate the graphics performance of the computer and detect possible problems in advance.
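
To make the thermal check concrete, here is a minimal Python sketch that polls the GPU temperature while a benchmark runs in another window. It assumes an NVIDIA card with the nvidia-smi utility on the PATH; the 85 °C threshold is an illustrative value, not a manufacturer specification.

```python
import subprocess
import time

def gpu_temperatures():
    """Return the current temperature (Celsius) of each NVIDIA GPU, read via nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return [int(line) for line in out.stdout.splitlines() if line.strip()]

THRESHOLD_C = 85  # illustrative safety margin; check your card's specification

# Poll every 5 seconds for about 10 minutes while the stress test runs in another window.
for _ in range(120):
    for idx, temp in enumerate(gpu_temperatures()):
        status = "WARNING: running hot" if temp >= THRESHOLD_C else "ok"
        print(f"GPU {idx}: {temp} C [{status}]")
    time.sleep(5)
```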

Does Graphics Card Test Harm the Computer?

Generally, properly performed graphics card tests will not harm your computer. However, some situations and misuse may pose risks. For example, your graphics card may be damaged if it overheats, so it is important to monitor the computer’s temperature while running the tests.

It is important to use reliable and known graphics card testing programs. Software downloaded from unknown or untrusted sources may contain malware that can harm your computer.

When the right programs are used and the temperature of the computer is monitored, graphics card tests can usually be done safely. In case of any doubt, the safest approach would be to seek professional help or use testing tools recommended by the manufacturer.


Analysis and Measurement: How to Evaluate Graphics Card Testing?

Accurately analyzing and measuring graphics card test results is an important step for computer users. It is necessary to understand the metrics offered by the program used for the test. These metrics typically include performance measurements such as average frame rate (FPS), graphics memory usage and latencies. The graphics card FPS test is a critical factor in determining performance in games and graphics-intensive applications.

Another important measurement is the temperature of the graphics card. High temperatures during testing may require reviewing the cooling system or considering extra cooling solutions. Negative situations such as graphics errors, crashes or performance drops should also be taken into account and these situations should be evaluated according to the graphics card speed test results.

Evaluating the results guides the computer user in understanding the power and stability of the graphics card and any potential problems. This evaluation is important to optimize video card settings or troubleshoot problems if necessary.

What should be the temperature of the graphics card?

The graphics card temperature should generally be kept within a certain range, which varies depending on the card model and usage conditions. In general, it is considered normal for the temperature to be between 30 and 40 degrees Celsius when idle. During intensive graphics operations this temperature rises, but it should generally remain within safe limits.

Under load, the graphics card temperature should ideally stay below about 80 to 85 degrees Celsius. Keeping within this range prevents damage to the card while maintaining performance. Some graphics card models or gaming workloads can tolerate somewhat higher temperatures.

Users can use special software or tools provided by the card’s manufacturer to monitor the graphics card temperature. They can check the temperature status with a graphics card gaming test and take cooling measures when necessary. It is important to pay attention to the temperature of the graphics card, especially when working under long-term load.

What are the Tax Responsibilities of Freelancers?


Freelancing has become an attractive business model that offers flexibility and autonomy to employees. However, freelancers have different tax liabilities than those working in a traditional job. In this article, we will examine the tax liabilities and billing processes that apply to freelancers.

Tax Responsibilities of Freelancers

Freelancers must pay income tax on all income they earn. The income tax rate varies depending on the amount of income earned and is declared annually.

VAT stands for value added tax. Freelancers who exceed a certain turnover limit must register for VAT and show VAT on their invoices. This can create an additional tax liability on the services they provide to their clients.

Clients are required to withhold tax from payments made to freelancers. Withholding tax is treated as a prepayment of income tax and is deducted directly from payments.

This helps freelancers meet their tax obligations and reduces the amount they have to pay when filing their tax returns.

Freelancers should regularly monitor their income and tax liabilities. There are various deductions and exemptions, but this varies by industry and job. For this reason, freelancers may want to consider getting support from tax advisors.

How is Taxation Done in Freelance Work?

There are two main types of taxation for freelance work, each with its own method: freelance (self-employment) income and commercial income. Freelance income is the most common for freelancers.

Freelancers declare their earnings as freelance income. In this case, the person reports the income directly on their income tax return and is taxed on it.

Freelancers may start to engage in commercial activities when they reach a certain size or want to expand their business. In this case, the taxation method changes because the person now earns commercial income.

Taxation of commercial income is a more complex process involving different elements of income and expenses. Those who earn commercial income must pay closer attention to detail in taxation and accounting.

Freelancers must be careful during the taxation process and act in accordance with tax laws. Otherwise, if any irregularities are detected during an audit, they may face serious penalties and sanctions.

They should track their income and expenses regularly, prepare tax returns, and make payments on time. In this way, freelancers can fulfill their tax responsibilities and be financially secure.

Invoicing Process While Working Freelance

Freelancers are required to invoice their clients for the services they provide. The invoice process includes the following steps:

  • Billing Information: The invoice should include basic information such as the issue date, due date and invoice number. This information ensures the validity of the invoice and makes it easy to track.
  • Customer Information: The invoice must include customer details such as the customer’s name, address and tax number.
  • Service Details: The invoice must include a description of the services provided or products sold.
  • Quantity and Pricing: The invoice must state the quantity and unit price of the service or product provided. This ensures that the amount the customer must pay is calculated correctly.
  • Taxation Information: The invoice must include the VAT rate and the calculated VAT amount. In addition, tax-related details such as the tax identification number must be included on the invoice.
  • Total Amount: The total amount of all services or products must be clearly stated on the invoice.

Freelancers can easily create invoices using accounting software or online invoice generators. Proper invoicing helps freelancers manage their financial records and meet their tax obligations.
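
As an illustration of the fields listed above, here is a minimal Python sketch of an invoice with a VAT calculation. The field names and the 20% VAT rate are hypothetical examples for illustration, not legal or accounting guidance.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InvoiceLine:
    description: str   # service provided or product sold
    quantity: float
    unit_price: float  # price excluding VAT

    @property
    def net_amount(self) -> float:
        return self.quantity * self.unit_price

@dataclass
class Invoice:
    number: str
    issue_date: date
    due_date: date
    customer_name: str
    customer_tax_id: str
    vat_rate: float                      # e.g. 0.20 for a hypothetical 20% VAT rate
    lines: list = field(default_factory=list)

    @property
    def net_total(self) -> float:
        return sum(line.net_amount for line in self.lines)

    @property
    def vat_amount(self) -> float:
        return self.net_total * self.vat_rate

    @property
    def total(self) -> float:
        return self.net_total + self.vat_amount

# Example usage with placeholder customer data.
inv = Invoice("2024-001", date(2024, 3, 1), date(2024, 3, 15),
              "Example Client Ltd.", "1234567890", vat_rate=0.20)
inv.lines.append(InvoiceLine("Web design service", quantity=10, unit_price=1500.0))
print(f"Net: {inv.net_total:.2f}  VAT: {inv.vat_amount:.2f}  Total: {inv.total:.2f}")
```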

Social Media Earnings Exemption

Starting from 2023, a tax exemption of up to 3 million TL has been introduced for earnings obtained through social media platforms. This exemption covers social media influencers, YouTubers, Twitch streamers and people who earn income by producing content on other platforms.

There are various conditions to benefit from the exemption. This exemption is also valid for 2024. In order to benefit from the exemption, you must first find out whether you meet the conditions. You can get help from a financial advisor for this. 

Then, you should notify the tax office and open a bank account through which your earnings will be taxed, in line with the notification you receive.

All of your social media earnings should be deposited into this account. Freelancers who benefit from this exemption cannot issue invoices, but they can document their income with a contract. If earnings exceed 3 million TL, the exemption does not apply to the excess amount, which is taxed differently.

How Do SSDs Affect Gaming Performance?


Nowadays, playing online games on the computer has become quite common. Some users even started to earn income through the content they created using various social media applications thanks to the games they played.

However, just having an internet connection and a computer is not enough to play these games. Online games use a large portion of a computer’s storage space, so when buying a new gaming computer you should make sure it has a high storage capacity.

Factors such as high-resolution graphics, 4K textures and customizable characters cause games to grow in size and take up more storage space.

Game enthusiasts want to install several games on their computer at the same time and play them whenever they want, so gigabyte-scale storage is often insufficient; they need terabytes of space. One of the most important components that meets this need is the SSD.

An SSD (solid-state drive) is a storage component installed in the computer that has no moving parts. Its most important function is storing files on the computer for the long term. SSDs, which are especially common in gaming computers, contribute significantly to the performance of online games. Their contributions to gaming performance are as follows:

Even before you start playing, an SSD makes the operating system boot much faster on desktop and laptop computers. While loading data may take a few minutes on an ordinary computer, it takes far less time on a computer with an SSD installed. In short, the operating system starts up much faster on computers with an SSD.

SSDs are not the only storage option; there are also HDDs. However, SSDs perform operations much faster than HDDs, and the most obvious difference between the two is the time it takes for games to load.

Every time you launch a game on a computer with an SSD, it loads in much less time than on an HDD. This way, you avoid long waits and spend more time actually playing.
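
If you want to see the difference yourself, a rough way to compare drives is to time a sequential read of the same large file stored on each one. The sketch below is a simplified Python example with hypothetical file paths; note that the operating system’s file cache can skew repeated runs.

```python
import os
import time

def timed_read(path: str, chunk_size: int = 8 * 1024 * 1024) -> float:
    """Read a file sequentially and return the achieved throughput in MB/s."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk_size):
            pass
    elapsed = time.perf_counter() - start
    return size / (1024 * 1024) / elapsed

# Hypothetical paths: the same large game asset copied to an SSD and to an HDD.
# A fair comparison needs a cold file cache (e.g. first read after a reboot).
for label, path in [("SSD", "D:/games/asset.pak"), ("HDD", "E:/games/asset.pak")]:
    print(f"{label}: {timed_read(path):.0f} MB/s")
```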

Computers with an SSD offer users a smooth gaming experience. While users play, thousands of small read and write operations occur within the game, and these operations can involve hundreds of megabytes of data.

The high bandwidth provided by SSDs, together with lower latency and fast access to the required data, makes for a smoother gaming experience. You can enjoy the game without lag or freezing.

Unlike SSDs, HDDs cause various problems: reading and writing take longer, and the mechanical magnetic platters introduce delays and stutter. Since an interrupted gaming experience has negative effects on users, an SSD is recommended; it provides the conditions for higher gaming performance.

SSDs are very important for better gaming performance. People building a computer for the first time should pay attention to its specifications: choosing the right SSD is as important as choosing the right CPU and GPU.

Faster loading of the operating system and faster loading and launching of games save users time. Shorter waits before starting a game greatly improve the gaming experience. In addition, SSDs are less likely to fail than HDDs and offer a long, durable lifespan.

 

6 best practices for using DOMO properly


This blog covers best practices to keep in mind when designing a BI dashboard using DOMO. Read carefully if you are considering creating a BI dashboard for your business unit.

Domo is a modern BI platform that has taken the world of data analytics by storm. It converts your data into insights and provides the right kind of context to make quick, data-driven decisions. At its core, it’s a cloud-based dashboard tool, and it provides data and visibility across all your data sources.

With the help of Domo’s visualization tool, you can see data from any aspect of your business. Domo connects directly to the data source and gives you crucial metrics with real-time information.

DOMO Best Practice #1: Know your audience

Before you create a dashboard, you need to know who will use it and how it will improve their performance. Users need to know where to look for data-driven business answers. Fortunately, Domo is structured in such a way that there are pages, subpages and collections. By using them effectively, the user will be able to easily navigate the tool and find the answers they were looking for.

DOMO Best Practice #2: Page-Level Architecture

Here are some recommendations for a highly optimized page-level architecture.

  1. Pages should move from the most macro view to the most micro view as they go left to right or top to bottom. Each collection should provide a granular perspective on metrics or categories of data.
  2. Cards should include alerts. Businesses need to set rules and conditions in their dashboard so they can be alerted directly if something goes wrong.
  3. Cards should be easy to understand.
  4. Every Domo user should be able to organize cards and collections on a page with a personalized view so they can see what’s important to them.

DOMO Best Practice #3: Data sets

You should know which metrics provide the most value. Multiple data sets will be required if you want to achieve optimal results. The actions you plan should not only be limited by the data you currently have access to. Identify the most ideal metrics to monitor that will direct you to where you want to go in the future, even if the data for those metrics isn’t immediately available.

DOMO Best Practice #4: Personalized Data Permissions (PDP)

Depending on the size of your organization, you need to determine PDP policies. The PDP policy will include information about who the data will be shared with. If sensitive data sets are present, then you need to configure specific PDP policies for this.

It is recommended to have a PDP policy on each DataSet to support a management-by-exception strategy. By doing so, you gain the following.

  1. You will be able to control access to the data behind a page. The recipient of a shared page will only be able to see the data if they have been added to the PDP policy.
  2. With the help of PDP, you can customize what each individual or team can see and send them data specific to their role.
  3. You can upload sensitive data and use a PDP policy to restrict it to a limited number of people. This adds an additional layer of security.
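
For teams that automate Domo administration, PDP policies can also be created programmatically. The sketch below is a hedged Python example based on the PDP policy endpoint in Domo’s public Dataset API; the column name, user ID, and dataset ID are placeholders, and the exact paths and fields should be verified against Domo’s current API documentation.

```python
import requests

API = "https://api.domo.com"  # Domo's public API host (verify against current docs)

def get_token(client_id: str, client_secret: str) -> str:
    """Obtain an OAuth token with the 'data' scope (credentials come from Domo's developer portal)."""
    resp = requests.post(
        f"{API}/oauth/token",
        params={"grant_type": "client_credentials", "scope": "data"},
        auth=(client_id, client_secret),
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def create_pdp_policy(token: str, dataset_id: str, user_ids: list, region: str) -> dict:
    """Create a PDP policy restricting the given users to rows where 'Region' equals their region."""
    policy = {
        "name": f"Region filter: {region}",
        "type": "user",
        "users": user_ids,
        "filters": [{"column": "Region", "values": [region], "operator": "EQUALS", "not": False}],
    }
    resp = requests.post(
        f"{API}/v1/datasets/{dataset_id}/policies",
        json=policy,
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()

# Hypothetical IDs for illustration only.
# token = get_token("MY_CLIENT_ID", "MY_CLIENT_SECRET")
# print(create_pdp_policy(token, "dataset-guid-here", [871428330], "EMEA"))
```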

DOMO Best Practice #5: Audit card collection

Add audit cards to your dashboard; the collection will help users manage by exception. An audit card collection is essentially a set of cards linked to actionable metrics where the ideal status is zero.

Let’s say a person handles customer requests, and the goal is to resolve all of them. An audit card will show whether the person has reached 0 pending requests and raise an alert when the count exceeds 0.

The card must be designed so that the summary number of the audit card shows 0 when no action is required. If the summary number is anything other than 0, the card owner should take action to bring the metric back to 0. In addition to setting up an audit card collection, you can add someone to the metric’s team to help implement these actions.

DOMO Best Practice #6: Access rights

When onboarding people to Domo, make sure permissions are set correctly for access, capabilities, and what they can do with the information.

Participant users can only view data. Here’s an overview of what they can do: view cards and pages, change page-level filters, view cards in a slideshow, collapse card collections, and export pages of cards to Microsoft PowerPoint.

Editor users have the capabilities of Participant users and, in addition, can edit the data and content they have access to, share cards, and create content. However, they cannot change authorization settings.

Privileged users can edit, access and delete cards. They can also access data, delete user accounts and assign security roles.

Administrator users have full rights over content access and user permissions.

Conclusion:

Domo is an all-inclusive platform and not just another self-service BI tool. It provides you with an analytics system that gives businesses a competitive edge. Domo integrates finance, operations, IT, sales, marketing and all other departments that use data to provide you with answers or solve problems by simplifying your data management. If you want to get the most out of Domo, we’ve outlined some of the best practices you should follow in this article.

If you are looking to take your business to the next level with the help of Business Intelligence (BI), you can check out Zuci’s Business Intelligence services. Our goal is not only to provide you with an immediate return on investment, but also to build you a system that provides long-term success. Schedule a 15-minute call with our Business Intelligence (BI) architects today.

What is Alive Monitoring (Ping Monitoring)? Explaining monitoring methods and how to make monitoring more efficient


Alive monitoring is a method used to check whether a target node is running or stopped, and is essential for monitoring systems and networks. Alive monitoring can be said to be the most basic monitoring method.

In this article, we will provide an overview of alive monitoring, the reasons for implementing it, the main monitoring methods such as those using ping, and tools that make monitoring more efficient.

Table of contents

  1. Overview of alive monitoring
    1. What is Alive Monitoring (Ping Monitoring)?
    2. Reasons for carrying out alive monitoring
    3. Points to note when performing alive monitoring
    4. Targets of alive monitoring
  2. Main methods of alive monitoring
    1. Ping monitoring
    2. Monitoring by watchdog
    3. Port monitoring
  3. How to conduct alive monitoring
    1. Perform manually
    2. Use monitoring tools
    3. Utilize agency services
  4. What is LogicMonitor, which realizes integrated monitoring?
  5. Summary

Overview of alive monitoring

First, here is an overview of alive monitoring and the reasons for implementing it.

What is Alive Monitoring (Ping Monitoring)?

Alive monitoring refers to efforts to periodically check whether networks, servers, etc. are operating.

Alive monitoring involves checking communication with servers and networks, and confirming that there is a response to understand the operating status. Since the Ping command is generally used, alive monitoring is sometimes called Ping monitoring.

In alive monitoring, the only thing to be monitored is whether the monitored object is operating or not.

In general, it does not cover aspects such as whether the application is performing appropriate processing or whether processing results can be provided to the user without delay. Checking these aspects involves implementing other monitoring methods, such as application monitoring (APM) and front-end monitoring.

Reasons for carrying out alive monitoring

By performing alive monitoring, you can confirm whether or not a problem has occurred. If there is no response from a server or the network during alive monitoring, you know that some kind of trouble has occurred.

Understanding the operating status of a system is one of the basics of system operation monitoring. When operating websites, business systems and the like, alive monitoring is essential.

At the same time, alive monitoring is only the first step in system operation monitoring: it can only tell you that the system is not operating properly.

If a problem is detected through alive monitoring, the next step is to find out more specifically what kind of failure is occurring and what its cause is. Based on information obtained from log monitoring, process monitoring, resource monitoring and so on, you can understand the system’s operating status in more detail, investigate the causes of failures, and take measures accordingly.

Points to note when performing alive monitoring

Since alive monitoring only determines whether the network or server is running, keep in mind that even if alive monitoring finds no problems, it does not necessarily mean that the system is operating normally.

For example, a server might be running out of CPU resources, making your application slow and unable to provide a good experience to your users. There may also be cases where normal processing is not performed due to an application error. Such situations should be detected using other monitoring techniques, such as resource monitoring or log monitoring.

Targets of alive monitoring

Alive monitoring is mainly performed on servers, storage, and network equipment.

Regarding servers, not only the physical server hardware but also the virtual machines and containers running on it are monitored. Web servers may also be monitored individually for each port. For network devices, alive monitoring targets routers, switches, Wi-Fi access points, and so on. Other equipment such as surveillance cameras and digital signage can also be subject to alive monitoring.

Main methods of alive monitoring

Below, we introduce the main methods for carrying out alive monitoring.

Ping monitoring

In many cases, alive monitoring is performed using a command called ping.

Ping is a command program that can request a response from a device with a specific IP address on an IP network. Ping is widely used as an easy-to-use program due to its high convenience of being able to easily check communication with a target.

Ping complies with the ICMP (Internet Control Message Protocol) protocol defined in the TCP/IP protocol suite. Since the processing is not dependent on a specific vendor, it can be used as a standard regardless of the product vendor. Typical devices are programmed to respond to pings.

When you make a ping request to the specified IP address, you can receive a response if the device or server to which that IP address is set is operating normally.

At that time, the response includes the round trip time (the total time taken from sending the packet to the destination until receiving the response), packet loss rate, etc. If there is no response, a message such as “Request timed out” or “Host Unreachable” will be output. In this case, communication with the target is not possible.

It is important to note that even if there is no response, it does not necessarily mean that the target is stopped.

Ping requests are naturally made through the network, so if the network equipment between the request and the target is down or disconnected, the request itself will not be able to reach the target in the first place.
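
As a simple illustration, the following Python sketch wraps the system ping command to check a list of hosts. The target addresses are placeholders, and the timeout flags assume typical Windows and Linux ping implementations (some Unix variants use different option units).

```python
import platform
import subprocess

def is_alive(host: str, timeout_s: int = 2) -> bool:
    """Send one ICMP echo request with the system ping command and report whether a reply came back."""
    on_windows = platform.system() == "Windows"
    count_flag = "-n" if on_windows else "-c"
    # Windows expects the timeout in milliseconds (-w 2000); Linux ping takes seconds (-W 2).
    timeout_flag, timeout_val = (("-w", str(timeout_s * 1000)) if on_windows
                                 else ("-W", str(timeout_s)))
    result = subprocess.run(
        ["ping", count_flag, "1", timeout_flag, timeout_val, host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

targets = ["192.168.1.1", "example.com"]  # hypothetical monitoring targets
for host in targets:
    print(f"{host}: {'alive' if is_alive(host) else 'NO RESPONSE'}")
```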

Monitoring by watchdog

For devices on which ping responses have been disabled for security reasons, a watchdog may be used to monitor aliveness.

“Watchdog” literally means “guard dog.” A watchdog installed on the monitored device periodically sends packets to a reporting destination, like a guard dog checking in. If these packets stop arriving, it is assumed that something is wrong with the device.

Note that methods such as ping, in which the monitoring side queries the monitored device, are sometimes called “active monitoring,” while methods such as the watchdog, in which the monitored device itself sends information to the monitoring side, are called “passive monitoring.”
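
A minimal sketch of the passive (watchdog-style) approach is shown below: the monitored device periodically sends a heartbeat packet, and the monitoring side would raise an alert when heartbeats stop arriving. The host, port, and interval are hypothetical values for illustration.

```python
import socket
import time

MONITOR_HOST = "192.0.2.10"   # hypothetical address of the monitoring server
MONITOR_PORT = 5005           # hypothetical UDP port the monitoring side listens on
INTERVAL_S = 30               # heartbeat interval

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
hostname = socket.gethostname()

# Passive monitoring: the monitored device keeps announcing "I am alive";
# the monitoring side alerts when the heartbeats stop.
while True:
    message = f"heartbeat {hostname} {int(time.time())}".encode()
    sock.sendto(message, (MONITOR_HOST, MONITOR_PORT))
    time.sleep(INTERVAL_S)
```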

Port monitoring

When monitoring web servers in particular, alive monitoring is often performed at the port level.

A port is a logical endpoint used to distribute communications exchanged over IP among multiple applications. For example, port 80 is used for HTTP communication, and port 143 is used for IMAP, which is used for email. By specifying a port, you can communicate with each application individually.

By performing alive monitoring on these ports, you can check the operating status at the level of the application behind each port. For example, if you check connectivity to port 80 and there is no response, some kind of problem may have occurred with HTTP communication and your company’s website may be unreachable.

Ping, mentioned above, operates at the network layer, so it cannot be used for port monitoring. Port monitoring uses TCP or UDP, which operate at the transport layer; in the simplest case it is done by attempting a TCP connection to the target port (for example with a tool such as telnet or nc) or by letting a monitoring tool perform such checks.
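
A simple port check of this kind can be done by attempting a TCP connection, as in the following Python sketch. The host names are placeholders; a real monitoring tool would run such checks on a schedule and raise alerts on failure.

```python
import socket

def port_is_open(host: str, port: int, timeout_s: float = 3.0) -> bool:
    """Attempt a TCP connection; success means the service behind the port accepts connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False

# Hypothetical checks: HTTP on a web server and IMAP on a mail server.
checks = [("www.example.com", 80), ("mail.example.com", 143)]
for host, port in checks:
    state = "open" if port_is_open(host, port) else "CLOSED / no response"
    print(f"{host}:{port} -> {state}")
```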

How to conduct alive monitoring

Next, we introduce concrete ways to carry out alive monitoring in practice.

Perform manually

The most basic way to perform alive monitoring is to manually execute the ping command or a similar command.

If the number of servers or network devices to be monitored is small, it is possible to perform this manually. In this case, one of the operational tasks is to periodically execute the Ping command, or visually check the operating status of a website.

This method is not impossible if there are only a few things to be monitored, but as the number of things to be monitored increases, it becomes difficult to do it manually. In that case, you may want to consider using the tools described below.

Use monitoring tools

One possible way to automate alive monitoring is to introduce operational monitoring tools.

Alive monitoring can be performed automatically by configuring the tool with the IP addresses of the servers or network devices to be monitored, the monitoring frequency, and so on.

By introducing tools, you can not only automate alive monitoring but also streamline the entire monitoring process. Since monitoring rarely consists of alive monitoring alone, tools are also effective for streamlining other monitoring tasks.

Most general-purpose tools can also raise an alert when no response is obtained during alive monitoring. This allows you to recognize that an abnormality has occurred even if you are not constantly watching the monitoring results.

Utilize agency services

If you find it difficult to monitor in-house due to lack of resources or skills, you may consider using a monitoring agency service.

This type of agency service is provided by an “MSP (Managed Service Provider)” and allows you to outsource the overall management of your systems, including operation, maintenance, and monitoring.

However, the disadvantage of using an agency service is that it incurs a certain cost and that your company does not accumulate know-how. Nowadays, systems are recognized as the core of business, so we recommend that you carefully consider whether to outsource operational monitoring tasks.

What is LogicMonitor, which realizes integrated monitoring?

In particular, as the scale of your company’s systems grows, building an efficient monitoring system becomes important in reducing workload and quickly responding to failures.

In this situation, you should consider adopting a monitoring tool that is effective in improving the efficiency of monitoring operations and reducing MTTR (Mean Time To Repair).

LogicMonitor, a SaaS-based integrated IT operation monitoring service, can centrally monitor all targets such as servers, networks, middleware, and applications. In addition to alive monitoring, it supports a variety of monitoring items such as hardware monitoring, process monitoring, network monitoring, and log monitoring.

LogicMonitor has monitoring templates that support over 2,500 types of servers, network devices, OS, and middleware. By using these, you can efficiently implement operational design even when performing monitoring work for the first time.

In recent years, there has been a shift from on-premises to public cloud, and LogicMonitor can support both on-premises and cloud. Even if your company has various IT assets, you can centrally monitor them.

For more information about LogicMonitor, please also see the service documentation here.

Summary

In this article, we have provided an overview of alive monitoring, the main ways to implement it, and tools that streamline monitoring operations, including alive monitoring.

In recent years especially, the importance of IT systems in business has increased. Under these circumstances, monitoring operations, including alive monitoring, must be carried out efficiently while ensuring stable system operation. Utilizing appropriate monitoring tools will help achieve this.

7 super powers you will gain with the chip revolution


Tiny chips to be placed deep in the brain are no longer just dreams in science fiction movies, but are very close to becoming a part of the real world. Speaking any language fluently with your thoughts, unlimited access to information, taking your creativity to the top, constantly monitoring your health, strengthening your memory and many other amazing abilities.

In this article, we try to offer a look at these revolutionary changes that may be possible with the neurotechnology of the future. Are you ready to open the doors of this innovative world and discover how brain chips can transform human life?

1. Speak any language as if it were your native language

Future neurotechnology will radically change language learning and communication through brain chips. Imagine if a small chip would allow you to speak any language fluently, like your native language. This technology will offer not only translation but also the ability to understand nuances and cultural contexts.

 Indiana University’s Brainoware system combines real brain cells with a chip, achieving success in areas such as voice recognition and math problems. However, these studies are still unclear about the information processing and learning capacity of laboratory-grown mini-brain structures.

The revolution expected from Elon Musk’s Neuralink is to enable paralyzed individuals to communicate with computers via brain-computer interface chips. Musk claims that human languages could be replaced by a single universal language within 5-10 years. This could make communication more effective, but it is also essential to consider the ethical and psychological implications.

Researchers from Johns Hopkins University question the profound effects of this technology on humanity and the nature of meaning, and emphasize that the long-term consequences should be considered.

2. Carry a library in your brain

Brain chips have the power to transform the learning and information access methods of the future. This innovation could revolutionize traditional education methods by providing direct access to vast libraries of knowledge. Purdue University research shows that this technology works on artificial neural networks similar to the human brain, but much faster and more energy efficient, using biologically inspired algorithms. Neuralink aims to democratize brain-machine interfaces, establishing bi-directional communication between neurons and external devices and enabling a wide range of applications.

 As part of the European Human Brain Project, researchers from the universities of Heidelberg and Bern managed to train spiking neural networks for deep learning with high efficiency using the BrainScaleS-2 neuromorphic platform. This platform can process information a thousand times faster than the human brain, while consuming much less energy than traditional computer systems, which makes a significant contribution to its integration into large systems.

3. Creativity and problem-solving skills

The chips to be placed in your brain will not only increase your data processing ability, but also take your creativity to a new dimension. Thanks to neurotechnology, you will be able to draw from a wider pool of inspiration and ideas. This will allow you to perform complex calculations and analyses at unprecedented speed.

More accurate brain activity recording and analysis will improve your creative thinking and problem-solving strategies. From your daily life to your professional fields, this technology will revolutionize your capacity to generate new ideas and solve complex problems. This innovation, which will push the limits of creativity and finding solutions to problems, may be the new way to step into the future.

4. Personalized health monitoring and improvement

Future neurotechnology will revolutionize your life with personalized health monitoring and improvement. Chips placed in your body will save lives by constantly monitoring your health status, identifying possible health problems in advance and recommending diet and lifestyle changes to prevent them. Going even further, these chips will be able to communicate directly with medical professionals.

Epidermal wearable biosensors can measure the amount of physical effort and exercise efficiency through sweat analysis, while distinguishing between healthy and unhealthy states by monitoring body movements. EEG technology, on the other hand, can monitor athletes’ performance and mental state and evaluate their focus and performance under stress.

A study in Nature Medicine shows how wearable sensors can be used to predict personalized clinical laboratory measurements. These sensors are placed under the skin, continuously monitoring health and providing personalized health recommendations. This technology will revolutionize the field of healthcare and lead us to a healthier and more conscious future.

5. Super memory and information management

The boundaries of neurotechnology are constantly expanding, especially with brain chips. The latest research in this field will allow you to better understand and intervene in brain activities. One of the most exciting aspects of these developments is the potential increase of human memory and information processing capacity. Forgetfulness can become history and information can dance at your fingertips.

Your abilities such as solving complex problems, situational awareness and concentration will take a huge leap forward. However, the ethical and social risks that these technologies may bring should not be ignored. The potential of this technology pushes the limits of our imagination and promises hope for the future.

6. Amazing virtual and augmented reality experiences

In the field of virtual and augmented reality, the revolutionary development of brain-computer interfaces (BCI) points to a future that makes the real and virtual world indistinguishable. Companies are working on systems that make it possible to navigate the virtual world with thoughts. For example, EyeMynd and Neurable are developing thought-controlled virtual reality systems, while Elon Musk’s Neuralink aims to connect brain electrodes to the digital world.

 These technologies will increase the depth of AR/VR experiences, offering experiences where you will not be able to perceive the difference between the real and virtual worlds. Especially in the field of rehabilitation and neuroplasticity (the structuring and adaptation of the brain with new information), BCI systems can reshape the brain by converting brain signals into computer commands. These innovations can revolutionize a wide range of fields, from education to entertainment, from professional simulations to daily life practices.

7. Time management and productivity increase

Brain-computer interfaces will redefine time management and productivity. This revolutionary neurotechnology will improve your learning, memory and physical performance through the integration of your brain with computers. Current research points to the potential of this technology to improve attention, memory and learning, improve mood and enhance communication.

 Current systems convert brain signals into computer commands, promoting brain plasticity (learning flexibility), accelerating decision-making and problem-solving abilities. These advances are improving learning capabilities in education, increasing productivity in the workplace, and enabling everyday devices with brain control.

 BCI technology enables people to overcome challenges and maximize their potential, enabling them to accomplish complex tasks with less effort. This innovative approach has the potential to shape the business and education world of the future and radically change life.

Explaining how to reduce maintenance costs and promote DX through modernization


Table of contents

  1. What is modernization?
  2. Background of the need for modernization
    1. Reduce IT maintenance costs
    2. Ensuring agility for DX promotion
  3. How to proceed with modernization
    1. Concept formulation
    2. Target system/priority analysis
    3. Choosing a migration method
    4. Architecture selection
  4. Points to note when modernizing
    1. Introducing a common platform is important
    2. Review business operations in parallel
    3. Beware of operational complexity during the transition period
  5. What is LogicMonitor, which realizes operational DX?
  6. Summary

What is modernization?

Modernization literally means “making modern,” but in the context of IT it refers to initiatives that update the infrastructure of outdated business systems, such as their hardware and software, in order to maintain and strengthen a company’s competitiveness.

Modernization is an effort to update so-called “legacy systems” and build efficient systems on the latest architecture. Legacy systems tend to have high maintenance costs and often have architectures that lack agility, for example requiring a long time to modify.

Operating and maintaining legacy systems is a huge burden for companies. In recent years, a widespread problem has been that companies are so occupied with maintaining legacy systems that they cannot secure the financial and human resources for new investments.

Background of the need for modernization

Why is modernization needed now? Below, two points are summarized: cost and agility.

Reduce IT maintenance costs

The first point is reducing IT maintenance costs. As pointed out in the Ministry of Economy, Trade and Industry’s DX Report*, approximately 80% of companies’ IT spending goes to keeping existing business running, and this cost is currently a heavy burden.

As the cost of so-called run-the-business operations remains high, many companies have no capacity for new investments, which hinders Japanese companies from increasing their international competitiveness.

Ensuring agility for DX promotion

Modernization is also necessary in the context of promoting DX. In order to promote DX, it is necessary to link data from existing systems and add functions to the system, but in general, the cost of renovating legacy systems is high and it takes time.

This is due to the complexity of legacy systems that have been operated and modified over many years. Even with simple fixes, there are various problems, such as the scope of impact being large, the cost of verification being high, and the lack of human resources capable of grasping the contents of complex systems.

Data utilization is also important in promoting DX, but in companies with a long history, useful data is often already hidden in legacy systems. On the other hand, data in legacy systems is not sufficiently standardized and requires name matching, and data formats and coding systems vary depending on the system. Modernization will also be an effective method for utilizing this data.

Against the backdrop of these challenges, the current situation is that companies are placing an emphasis on modernization.

How to proceed with modernization

So how should we proceed with modernization? Here, we will introduce the following four steps.

  • Concept formulation
  • Target system/priority analysis
  • Choosing a migration method
  • Architecture selection

Concept formulation

Modernizing individual systems in isolation will not lead to overall optimization. When implementing modernization, the first step is to formulate an overall concept.

Specifically, an enterprise-wide IT architecture needs to be defined. For example, one idea might be to use the cloud as a common platform to build an authentication environment, DWH, data lake, etc., while defining the contents of individual business applications for each system.

Utilizing data is the most important point in promoting DX. When modernizing each system, it is necessary to take data governance into account, including aspects such as centralized data management and security.

Target system/priority analysis

Modernization requires a certain amount of investment. It is not practical to migrate all of your company’s systems to the latest architecture. Therefore, it is necessary to prioritize the existing systems and then assemble the implementation order.

The key to setting priorities is to define clear evaluation axes. For example, by analyzing existing systems along the axes of “frequency of use” and “frequency of modification,” their relative importance can be determined.

Systems that are infrequently used and infrequently modified have a low priority for modernization, and in some cases you may want to consider retiring them. For systems that are frequently used but infrequently modified, you can choose to extend their lifespan as long as possible. On the other hand, for systems that are both frequently used and frequently modified, early modernization can significantly reduce maintenance costs.

In addition, areas prioritized by your business strategy, or processes where you plan to implement DX, should be factored into the priorities. Even a system that is not used often today becomes a high priority if it supports an area the business wants to focus on in the future.
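
To make this concrete, here is a minimal Python sketch of such a prioritization matrix; the system names, thresholds, and recommendations are illustrative assumptions, not part of any formal method.

```python
# Minimal sketch: classify systems by usage and modification frequency.
# The systems, thresholds, and recommended actions below are illustrative only.

SYSTEMS = [
    # (name, uses per month, modifications per year)
    ("order management", 4000, 12),
    ("legacy reporting", 15, 0),
    ("inventory", 2500, 1),
]

def recommend(uses_per_month: int, mods_per_year: int) -> str:
    frequently_used = uses_per_month >= 100      # assumed threshold
    frequently_modified = mods_per_year >= 4     # assumed threshold
    if frequently_used and frequently_modified:
        return "modernize early (large maintenance-cost savings)"
    if frequently_used and not frequently_modified:
        return "extend lifespan as long as possible"
    if not frequently_used and not frequently_modified:
        return "low priority; consider retiring"
    return "review individually"

for name, uses, mods in SYSTEMS:
    print(f"{name}: {recommend(uses, mods)}")
```

In practice the axes and thresholds would come from your own usage and change-request data, and business-strategy priorities would override the mechanical score where needed.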

Choosing a migration method

There are various options for migrating existing systems. The main ones are summarized below.

| Method | Overview |
| --- | --- |
| Lift & shift | Migrate legacy systems to the cloud as-is, then optimize them for the cloud environment in stages. |
| Rewrite | Convert a system built in a legacy language to a new language. |
| Rearchitecture | Rebuild the system using the latest architecture. |
| Repurchase | Change the products used, for example migrating a packaged system to SaaS. |
| Retain | For low-priority legacy systems, continue current operations as much as possible. |
| Retire | For systems with few users or infrequent use, decommissioning is an option. |

It is effective to map existing systems to these options based on their usage status, known issues, business conditions, and so on. For example, if a system is scheduled to be reviewed or reworked to promote DX, choose rearchitecture or repurchase. On the other hand, for systems that are rarely modified and of low importance, a temporary lift & shift may be sufficient.

Architecture selection

Especially when updating an existing system through re-architecture, it is necessary to consider the architecture after the update. For efficient modernization, it is effective to use cloud services such as SaaS and PaaS. By utilizing these, it is possible to develop quickly to keep up with the business environment and new initiatives of competitors.

Under a cloud-first policy, one idea is to set decision criteria such as: choose SaaS if a suitable SaaS offering is available, use IaaS if it is not, and build on-premises only when an on-premises environment is absolutely necessary.
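
Expressed as a tiny Python sketch, such a decision rule might look like the following; the inputs and wording are assumptions used for illustration only.

```python
# Minimal sketch of a cloud-first decision rule, as described above.
# The inputs are illustrative assumptions, not a formal policy.

def choose_platform(saas_available: bool, on_prem_required: bool) -> str:
    if saas_available:
        return "SaaS"           # prefer SaaS when a suitable service exists
    if not on_prem_required:
        return "IaaS"           # otherwise build in the cloud on IaaS
    return "on-premises"        # only when on-premises is absolutely necessary

print(choose_platform(saas_available=True, on_prem_required=False))   # SaaS
print(choose_platform(saas_available=False, on_prem_required=False))  # IaaS
print(choose_platform(saas_available=False, on_prem_required=True))   # on-premises
```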

For applications that are a source of competitive advantage or differentiation, developing modern architectures on IaaS is also worth considering. On the other hand, for functions with similar needs across systems, such as workflow, authentication, and archiving, providing them as common services as described above is an option.

Points to note when modernizing

Here are some things to keep in mind when modernizing.

Introducing a common platform is important

A key point in modernization initiatives is the consideration of a common infrastructure. Traditionally, systems have often been developed and optimized individually, leading to problems such as complex inter-system integration, suboptimal placement of functions, and duplication of functions and data.

By standardizing systems as much as possible on a common platform, a company can aim for an overall optimal system landscape. As introduced so far, building a common cloud environment and providing common functions are effective methods. Other possibilities include standardizing company-wide operations by introducing ERP, building a common data infrastructure with a DWH/data lake, and providing self-service environments such as BI, RPA, and low-code development platforms.

Additionally, it would be effective to introduce a common platform for system operations, such as a centralized operation monitoring tool and DR platform.

Review business operations in parallel

When modernizing, changes on the system side alone are often insufficient. In particular, when introducing a system based on SaaS or packages, it is essential to review your business operations. If you force customization without reviewing the business, the problem of persistently high maintenance costs will not be resolved.

In parallel with modernization, business operations should also be reviewed.

Beware of operational complexity during transition period

Due to cost and resource considerations, it is difficult to modernize all systems in one step. Therefore, companies will have to deal with a transition period.

Particular attention should be paid to the complexity of operational management. In addition to operational management of existing on-premises systems, it is also necessary to support operational management of newly added cloud environment systems.

During the transition period, then, the burden on the operations department tends to increase. To respond, consider automating operational tasks, utilizing AIOps, and introducing operational management tools compatible with both on-premises and cloud environments as part of operational DX.

What is LogicMonitor, a tool for realizing operational DX?

During the transition period of modernization and the standardization of company-wide IT infrastructure, an integrated IT operations management tool that covers every environment is needed to run operations management efficiently.

LogicMonitor is a SaaS-based integrated IT operations monitoring service. It centrally monitors a wide range of systems, whether on-premises or in the cloud. Around 3,000 monitoring templates are available, covering IT layers such as servers, networks, storage, operating systems, and containers.

LogicMonitor can be used to promote modernization and streamline increasingly complex operational tasks, making it an effective option for advancing operational DX.

Summary

In this article, we provided an overview of modernization and how to proceed with it. Although promoting modernization is necessary for companies, it is also necessary to consider the complexity of operations during the transition period. As you move forward with modernization, you should also consider operational improvements.

Solidigm SSDs’ Role in Advancing AI Storage


As artificial intelligence advances rapidly to fuel humanity’s ambitions, computing power has had to grow as well. Fueled by high-throughput, low-latency networks and deep learning models, thousands of GPU clusters are popping up everywhere. This evolving marketplace prompts deep reflection from AI architects. One of the most important questions is: what AI storage infrastructure can keep AI accelerators (GPUs, CPUs, etc.) and network devices running at full capacity without idle time?

Phases of an AI project cycle

An analysis of industry practices reveals that a typical AI project cycle consists of three main phases: 

  1. Importing and preparing data
  2. Model Development (Training)
  3. Model Deployment (Inference) 

The fourth phase (optional) may involve iterative refinement of the model based on actual inference results and new data. To understand the storage requirements for AI, it is essential to understand the nature of the primary input/output (I/O) operations in each phase and consider them collectively to form a comprehensive view.

Phase 1: Data Ingestion and Preparation

Before diving into training, it is important to thoroughly prepare the data that will be fed into the training cluster.

1. Data transformation: discovery, extraction, and preprocessing

The raw data used to create AI models inherits the traditional big data characteristics of the “3Vs”: Volume, Velocity, and Variety. The sources of data vary from event logs, transaction records, and IoT inputs to CRM, ERP, social media, satellite imagery, economics, and stock trading. Data from these diverse sources needs to be extracted and consolidated into a temporary storage area within the data pipeline. This step is usually called “extraction”.

The data is then transformed into a format suitable for further analysis. In the original source systems, the data is chaotic and difficult to interpret, so part of the goal of transformation is to improve data quality. Typical steps include the following (sketched in the example further below): 

  1. Cleaning up invalid data
  2. Removing duplicate data
  3. Standardizing units
  4. Organizing data by type

During the transformation phase, data is structured and reformatted to fit a specific business purpose – this step is called “transformation.”
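
As a minimal illustration of these cleanup steps, the following pandas sketch drops invalid and duplicate rows, standardizes units, and fixes column types; the column names, sample values, and unit conversion are hypothetical.

```python
import pandas as pd

# Hypothetical raw extract: temperature readings in mixed units, with gaps and duplicates.
raw = pd.DataFrame({
    "sensor_id": ["a1", "a1", "b2", "c3", None],
    "temperature": [71.6, 71.6, 23.0, 19.5, 30.0],
    "unit": ["F", "F", "C", "C", "C"],
})

clean = (
    raw
    .dropna(subset=["sensor_id"])        # 1. clean up invalid rows
    .drop_duplicates()                   # 2. remove duplicate records
).copy()

# 3. standardize units: convert Fahrenheit readings to Celsius
is_f = clean["unit"] == "F"
clean.loc[is_f, "temperature"] = (clean.loc[is_f, "temperature"] - 32) * 5 / 9
clean.loc[is_f, "unit"] = "C"

# 4. organize data by type
clean = clean.astype({"sensor_id": "string", "temperature": "float64"})

print(clean)
```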

2. Data exploration and dataset splitting

Data analysts use visualization and statistical techniques to describe the characteristics of a dataset, such as its scale, volume, and precision. Through exploration, they identify and explore relationships between different variables, the structure of the dataset, the presence of anomalies, and the distribution of values. Data exploration allows analysts to dig deep into the raw data.

It helps identify obvious errors, better understand patterns in the data, detect outliers and unusual events, and uncover interesting relationships between variables. Once data exploration is complete, the dataset is typically split into training and testing subsets, which are used separately during model development for training and testing purposes.
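
A minimal sketch of such a split, using scikit-learn (one of the tools listed later in this article) on a synthetic dataset:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic dataset standing in for the explored data.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=42)

# Hold out 20% of items for testing; stratify to keep class balance comparable.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

print(X_train.shape, X_test.shape)  # (8000, 20) (2000, 20)
```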

3. Feature Extraction, Feature Selection, and Pattern Mining

The success of an AI model depends on whether the selected features effectively represent the classification problem under study. 

For example, consider classifying the individual members of a choir: candidate features include gender, height, skin color, education level, and vocal range.

Unlike the first four attributes, vocal range spans a much smaller set of values, so it requires far less data while discriminating between the classes more accurately. 

To avoid the dangers of high dimensionality and reduce computational complexity, the process of identifying the most effective subset of features is known as feature selection.

The process of uncovering the essential relationships and logic among feature sequences, such as which ones are mutually exclusive and which ones coexist, is called pattern mining.

4. Data Conversion 

The need to convert data may arise for a variety of reasons: a desire to align one dataset with another, to ensure compatibility, to migrate part of the data to another system, to establish connections with other datasets, or to aggregate information within the data. 

Common aspects of data transformation include converting types, changing semantics, adjusting value ranges, changing granularity, splitting tables or datasets, transforming rows and columns, etc.

Thanks to a mature open source project community, there are plenty of reliable tools at your disposal for the data ingestion and preparation stages. These tools allow you to perform ETL (extract, transform, load) or ELT (extract, load, transform) tasks. Examples include:

  • Kafka
  • Sqoop 
  • Flume
  • Spark
  • Snow

Additionally, for tasks such as creating large sets of features, you can leverage tools such as the following (a brief example follows the list):

  • Spark
  • Pandas
  • Numpy
  • Spark MLLib
  • scikit-learn
  • XGBoost
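
For instance, a small feature-engineering step with Pandas and scikit-learn might look like the sketch below; the columns and derived features are illustrative assumptions.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical transaction records.
df = pd.DataFrame({
    "amount": [12.5, 250.0, 7.8, 99.9],
    "timestamp": pd.to_datetime(
        ["2024-01-05 09:12", "2024-01-05 23:40", "2024-01-06 14:02", "2024-01-07 08:55"]
    ),
    "category": ["food", "electronics", "food", "clothing"],
})

# Derive simple features: hour of day, weekend flag, one-hot encoded category.
features = pd.DataFrame({
    "hour": df["timestamp"].dt.hour,
    "is_weekend": (df["timestamp"].dt.dayofweek >= 5).astype(int),
})
features = pd.concat([features, pd.get_dummies(df["category"], prefix="cat")], axis=1)

# Scale the numeric amount so models are not dominated by its range.
features["amount_scaled"] = StandardScaler().fit_transform(df[["amount"]]).ravel()

print(features.head())
```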

5. Storage characteristics suitable for data ingestion and preparation phase 

During the data ingestion and preparation phase, a typical workflow is to read data randomly and write processed items sequentially. It is essential for the storage infrastructure to provide low latency for small random reads while simultaneously achieving high sequential write throughput.

Phase 2: Model development and training

Once the training dataset is prepared, the next phase is model development, training, and hyperparameter tuning. The choice of algorithm is determined by the characteristics of the use case, and the model is trained using the dataset.

1. AI Framework 

The model’s performance is evaluated against the test dataset, adjusted where necessary, and finally deployed. AI frameworks continue to evolve; popular choices include:

  • TensorFlow
  • PyTorch
  • MXNet
  • Scikit Learn
  • H2O
  • others

At this stage, the demands on compute resources are very high, and storage becomes important: feeding those resources with data quickly and efficiently is a priority for eliminating idle time.

During model development, datasets grow continuously and often need to be accessed simultaneously by many data scientists from different workstations, who dynamically augment them with thousands of variations to prevent overfitting.

2. Storage capacity expandability and data sharing 

At this stage storage capacity starts to become important, but as the number of concurrent data access operations increases, scalable performance becomes the key to success. Data sharing between workstations and servers is an essential storage feature, along with fast and seamless capacity expansion.

As training progresses, the size of the dataset increases, often reaching several petabytes. Each training job typically involves random reads, and the entire process consists of many concurrent jobs accessing the same dataset. Multiple jobs competing for data access intensifies the overall random I/O workload.

The transition from model development to training requires storage that can scale without disruption to accommodate billions of data items, as well as fast multi-host random access, and especially high random read performance. 

Training jobs often involve decompressing input data, augmenting or perturbing it, and randomizing the input order; with billions of items, they also need to enumerate data items, querying the storage system for lists of training items.
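
To make that access pattern concrete, here is a minimal PyTorch sketch (PyTorch being one of the frameworks listed above): the DataLoader shuffles the read order every epoch and applies a light random perturbation, which is what turns training into a random-read workload at the storage layer. The dataset is synthetic and the sizes are arbitrary.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class SyntheticDataset(Dataset):
    """Stands in for a huge corpus of training items read from shared storage."""

    def __init__(self, n_items: int = 10_000, dim: int = 128):
        self.n_items = n_items
        self.dim = dim

    def __len__(self) -> int:
        return self.n_items

    def __getitem__(self, idx: int):
        # In a real pipeline this would read (and possibly decompress) item `idx`
        # from storage; generating it here keeps the sketch self-contained.
        g = torch.Generator().manual_seed(idx)
        x = torch.randn(self.dim, generator=g)
        x = x + 0.01 * torch.randn(self.dim)   # light augmentation / perturbation
        y = torch.tensor(idx % 2)
        return x, y

if __name__ == "__main__":
    # shuffle=True randomizes the read order every epoch, and multiple workers
    # issue concurrent reads -- the random-read workload described above.
    loader = DataLoader(SyntheticDataset(), batch_size=256, shuffle=True, num_workers=2)
    for x, y in loader:
        pass  # the training step would run here
```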

3. Checkpoint Creation: Large Sequential Write Bursts

The sheer scale of training creates new demands: training jobs today can run for days or even months, so most jobs write periodic checkpoints to quickly recover from failures, minimizing the need to restart from scratch. 

Thus, the primary workload during training consists of random reads, which may be interrupted by large sequential writes during checkpointing. The storage system must be able to sustain the intensive random access required by concurrent training jobs, even during the burst of large sequential writes during checkpointing.
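
A minimal PyTorch sketch of that checkpointing pattern; the model, interval, and file names are arbitrary placeholders.

```python
import torch
import torch.nn as nn

model = nn.Linear(1024, 1024)                     # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

CHECKPOINT_EVERY = 100                            # arbitrary interval, in steps

for step in range(1, 501):
    # ... forward/backward/optimizer.step() would run here ...
    if step % CHECKPOINT_EVERY == 0:
        # One large sequential write: the full model and optimizer state hit storage.
        torch.save(
            {"step": step,
             "model": model.state_dict(),
             "optimizer": optimizer.state_dict()},
            f"checkpoint_{step:06d}.pt",
        )

# On failure, training resumes from the most recent checkpoint instead of step 0.
ckpt = torch.load("checkpoint_000500.pt")
model.load_state_dict(ckpt["model"])
optimizer.load_state_dict(ckpt["optimizer"])
```

The storage system therefore has to absorb these periodic sequential write bursts without starving the random reads issued by other concurrent training jobs.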

4. Summary of the model development phase 

In summary, developing an AI model is a highly iterative process, where successive experiments confirm or refute hypotheses. As the model evolves, data scientists use example datasets to train the model, often through tens of thousands of iterations. 

With each iteration, data items are augmented and slightly randomized to prevent overfitting, creating a model that is accurate for the training dataset but can also adapt to live data. As training progresses, the dataset grows, moving from the data scientist’s workstation to servers in a data center with greater computing and storage power.

Phase 3: Model deployment and inference

Once a model is developed, it is deployed and put into production. During the inference phase, real-world data is fed into the model, and ideally, its output provides valuable insights. Models are often continuously fine-tuned; new real-world data imported into the model during the inference phase is incorporated into the retraining process to improve performance.

1. Fine-tuning for real applications

Your AI storage infrastructure must operate seamlessly around the clock throughout the lifecycle of your project, so it must be self-healing to handle component failures and enable expansion and upgrades without downtime.

Data scientists need production data to fine-tune their models and explore changing patterns and goals. This highlights the importance of a unified platform, a single storage system that serves all phases of a project. Such a system gives development, training, and production easy access to dynamically evolving data.

2. Preparing the model for production 

Once a model produces consistently accurate results, it is deployed to a production environment. The focus then shifts from improving the model to maintaining a robust IT environment. Production can take many forms, such as interactive or batch-oriented. Continuous use of new data helps refine the model to increase its accuracy, and data scientists regularly update the training dataset while analyzing the model output.

The table below summarizes each phase of an AI project cycle and their respective I/O characteristics and associated storage requirements.

| Phase | I/O characteristics | Storage requirements | Impact |
| --- | --- | --- | --- |
| Data ingestion and preparation | Random data reads; sequential writes of preprocessed items | Low latency for small random reads; high sequential write throughput | Optimized storage lets the pipeline deliver more data for training, leading to more accurate models |
| Model development (training) | Random data reads; large sequential writes for checkpointing | Performance and capacity scalability across many concurrent jobs; optimized random reads; high sequential write performance for checkpointing | Optimized storage improves utilization of expensive training resources (GPUs, TPUs, CPUs) |
| Model deployment (inference) | Mixed random reads and writes | Self-healing to handle component failures; non-disruptive expansion and upgrades; if the model is continuously fine-tuned, the same capabilities as in the training phase | The business demands high availability, maintainability, and reliability |

Table 1. AI project phases with their I/O characteristics and resulting storage requirements

Key Storage Characteristics for AI Deployments

AI projects that start as single-chassis systems during initial model development need to become more flexible as data requirements grow during training and more live data accumulates in production. Two key strategies are employed at the infrastructure level to achieve high capacity: increasing individual drive capacity and scaling out clusters of storage enclosures. 

1. Capacity

Increasing the capacity of individual disks and improving horizontal scalability of storage nodes are key factors. At the disk level, products such as the Solidigm D5-P5336 QLC SSD have reached capacities of up to 61.44TB. At the storage enclosure level, the Enterprise and Datacenter Standard Form Factor (EDSFF) shows unparalleled storage density.

For U.2 15mm form-factor drives, a typical 2U enclosure accommodates 24-26 disks, providing roughly 1.5PB of capacity. With the move to the E1.L 9.5mm form factor, a 1U enclosure can accommodate 32 disks, as shown in Figure 1. At 2U, the storage density is approximately 2.6x higher than a 2U U.2 enclosure. A comparison is shown in Table 2.
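
As a quick back-of-the-envelope check of those densities, assuming 61.44TB drives and the drive counts shown in Table 2 below:

```python
# Raw capacity per 2U of rack space, assuming 61.44TB drives.
drive_tb = 61.44

u2_drives_per_2u = 24        # legacy U.2 15mm, 2U enclosure
e1l_drives_per_1u = 32       # E1.L 9.5mm, 1U enclosure -> 64 per 2U

u2_capacity_pb = u2_drives_per_2u * drive_tb / 1000          # ~1.47 PB
e1l_capacity_pb = 2 * e1l_drives_per_1u * drive_tb / 1000    # ~3.93 PB

print(f"U.2 15mm  : {u2_capacity_pb:.2f} PB per 2U")
print(f"E1.L 9.5mm: {e1l_capacity_pb:.2f} PB per 2U ({e1l_capacity_pb / u2_capacity_pb:.2f}x)")
```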

| Form factor | 60TB-class drives per 2U of rack space | Capacity per 2U of rack space |
| --- | --- | --- |
| Legacy U.2 15mm | 24 | 1.47PB |
| E1.L 9.5mm | 64 | 3.93PB |

Table 2. 2U rack unit capacity based on drive form factor

What’s noteworthy is that high storage density in a single enclosure significantly reduces the rack space storage nodes take up, the number of network ports required, and the power, cooling, spare parts, and manpower needed to operate them for the same total capacity.

2. Data sharing function

Considering the multi-team collaboration described above and the desire to train on more data before deployment, the data-sharing capability of storage is of paramount importance. This is reflected in the high IOPS, low latency, and bandwidth of the storage network. In addition, multipath support is essential so that network services continue to operate even when a network component fails. Over time, networks have consolidated around Ethernet and InfiniBand. InfiniBand offers a range of data rates, excellent bandwidth and latency, and native support for RDMA, which has made it a powerful network for AI storage. On the Ethernet side, the most popular bandwidths today are 25Gbps, 40Gbps, and 100Gbps, and NVIDIA also offers products supporting 200Gbps and 400Gbps with low latency and RDMA. For east-west data flows between compute and storage, nodes are equipped with dedicated storage VLANs.

3. Adaptability to Various I/O

AI storage performance must be consistent across all types of I/O operations. All files and objects, whether they are a small 1KB item label or a 50MB image, must be accessible in roughly the same amount of time to ensure that the TTFB (time-to-first-byte) remains consistent.
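
As a rough illustration of what consistent TTFB across object sizes means, the sketch below creates objects of very different sizes on a local filesystem and times how long the first byte of each takes to come back; it is a stand-in for a real benchmark, not a measurement of any particular storage product.

```python
import os
import time

SIZES = {"label_1KB": 1_024, "thumbnail_1MB": 1_048_576, "image_50MB": 50 * 1_048_576}

# Create sample objects of very different sizes.
for name, size in SIZES.items():
    with open(name, "wb") as f:
        f.write(os.urandom(size))

# Time-to-first-byte: how long until the first byte of each object is available.
for name in SIZES:
    start = time.perf_counter()
    with open(name, "rb") as f:
        f.read(1)                      # read just the first byte
    ttfb_us = (time.perf_counter() - start) * 1e6
    print(f"{name}: TTFB ≈ {ttfb_us:.0f} µs")

# Clean up the sample files.
for name in SIZES:
    os.remove(name)
```

On a well-behaved storage system, the reported TTFB values should stay in the same order of magnitude regardless of object size.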

4. Parallel Network File Operations

AI projects demand efficient parallel network file operations for common tasks such as bulk copy, enumeration, and property modification. These operations greatly accelerate AI model development. Originally developed by Sun Microsystems in 1984, NFS (Network File System) remains the most popular network file system protocol today. NFS over Remote Direct Memory Access (NFS over RDMA) is particularly well suited for compute-intensive workloads that transfer large amounts of data. The data movement offload capabilities of RDMA reduce unnecessary data copies, improving efficiency.

5. Summary of Key AI Storage Characteristics

AI storage solutions must provide sufficient capacity, robust data sharing capabilities, consistent performance across a variety of I/O types, and support for parallel network file operations. These requirements ensure that AI projects can effectively manage growing datasets and meet the performance demands of model development and deployment.

AI development continues to exceed even lofty expectations. With computing giants under pressure to process more data at higher speeds, there is no room for idle processing time or wasted power. Solidigm offers drives in a variety of form factors, densities, and price points to meet the needs of various AI deployments, and high-density QLC SSDs have proven their superiority in performance, capacity, reliability, and cost. 


Figure 1. Transitioning from a TLC-only solution to SLC/TLC+QLC.

Combining CSAL (Cloud Storage Acceleration Layer) with Solidigm D7-P5810 SLC SSDs gives customers the ability to tailor their deployments for performance, cost, and capacity. With an innovative full-stack, open-source storage solution, it is clear that Solidigm SSDs have unique advantages for accelerating advances in AI storage.

Figure 2. CSAL architecture: traditional write caching vs. write shaping with CSAL

About the Author

Sarika Mehta is a Storage Solutions Architect at Solidigm with over 15 years of experience in the storage industry, focusing on optimizing storage solutions for both cost and performance by working closely with Solidigm’s customers and partners. 

Wayne Gao is a Principal Engineer and Storage Solutions Architect at Solidigm. Wayne worked on the research and development of CSAL from pathfinding through its commercial release at Alibaba.

Wayne was previously a member of the Dell EMC ECS all-flash object storage team; he has over 20 years of storage development experience, four US patent applications/grants, and is a EuroSys paper author. 

Yi Wang is a Field Applications Engineer at Solidigm. Prior to joining Solidigm, he held technical positions at Intel, Cloudera, and NCR. He is a Cisco Certified Network Professional, a Microsoft Certified Solutions Expert, and a Cloudera Data Platform Administrator.

Triple-A game file sizes


Game developers are engaged in a perpetual arms race, competing in ingenuity to offer their fans the most technical and visually stunning adventures. Players’ desire for more immersive, more epic, and more spectacular experiences naturally leads to games that offer… more.

 But larger games mean larger files, which poses logistical problems, especially for console owners. Internal storage space is very limited, and even the most generous current-generation internal storage drives seem small compared to the size of the triple-A games coming out this year.

Did console developers Sony, Microsoft and Nintendo anticipate how quickly game file sizes would explode over the lifespan of their consoles and how difficult it would be for gamers to find their own storage solutions? Have computer manufacturers building gaming machines anticipated the storage and speed needs of today’s games?

Open worlds mean big storage needs

Bethesda’s Starfield is a case in point. Highly anticipated by Xbox and PC players eager for a new open-world (or perhaps open-galaxy) science-fiction title, its size was estimated at 75 GB at release. For comparison, Bethesda’s last open-world single-player RPG, Fallout 4, weighed in at around 30 GB when it was released in 2015.

Likewise, Baldur’s Gate III, developed by Larian Studios, is a gargantuan 150 GB. Larian Studios’ last comparable production, Divinity: Original Sin 2 in 2017, had a PC file size of 60 GB. Another entry in the growing field of open-world games is Blizzard’s Diablo 4, released on PS4, PS5, Xbox One, Xbox Series X/S, and PC.

While it’s true that it was released more than ten years ago, Diablo 3 was a big game for 2012 at 25 GB. Its sequel, however, is considerably larger at around 80 GB.


The Nintendo Switch is no exception to this phenomenon. Although its technical limitations and tiny internal storage capacity (only 32 GB) force some restraint, even Nintendo strives to keep pace. 

Nintendo’s largest game ever, The Legend of Zelda: Tears of the Kingdom, weighs in at 18.2 GB, compared to 14.4 GB for its 2017 predecessor Breath of the Wild. That’s more than half the Switch’s internal storage!

Games are getting bigger and bigger, regardless of genre


This rapid expansion isn’t limited to open-world games. Capcom’s PS4, PS5, Xbox, and PC remake of the iconic action/horror/survival game Resident Evil 4 weighs in at 68 GB, while the franchise’s 2021 installment, Resident Evil Village, weighed in at just 35 GB.

Meanwhile, Respawn Entertainment’s Star Wars Jedi: Survivor for PS5, Xbox, and PC is 130 GB, nearly 90 GB larger than its predecessor Star Wars Jedi: Fallen Order, a comparatively modest 43 GB.

Even games in genres that you wouldn’t expect to require such large file sizes are affected by storage bloat. Street Fighter 6 , the latest in Capcom’s flagship fighting series, is estimated at 60 GB on PS4, PS5, Xbox and PC, while Street Fighter 5 weighed just 12.4 GB in 2016.

Why are modern games so large?

Opinions differ on why games from the PS5 and Xbox S/X era have such gargantuan file sizes.

 The prevailing view is that the electronics industry’s shift for televisions and monitors from 1080p to 4K resolutions has a lot to do with it.

 Higher-resolution screens can show much more detailed textures: 4K has four times as many pixels as 1080p.

 Game developers want to use these details to create more impressive textures for their environment and character models. 

Because these textures are more detailed, they take up a lot more space. If we apply this principle to all game objects, we obtain a much larger game file. 

Although developers do their best to recycle textures, there are limitations.

While the amount of work devoted to textures has increased, the budgets allocated to games have not increased in the same proportions. 

Even though retail prices for triple-A games have increased, much to the dismay of gamers everywhere, it is not enough to offset the cost of additional labor.

 Industry insiders also point to another culprit: the reduced incentive to shrink file sizes. When games were primarily distributed on physical media such as CDs, DVDs, and cartridges, reducing a game’s file size had a clear financial benefit, as it reduced publishing and distribution expenses. 

Today, barely one game in twenty sold involves a physical medium. And even then, buyers have to download large files in order to install the data that will allow the disc or game card to play.

Granted, the storage costs for these large files, downloaded hundreds of thousands if not millions of times, are considerable.

 But they make it possible to avoid even greater ancillary costs in terms of publishing and distribution. 

In summary, there is less financial incentive to compress and optimize file sizes. Add to this the enormous graphics workload that monopolizes the time available for project optimization, and the sharp increase in game file sizes is inevitable.

Finally, while probably a less important factor, it’s also worth noting that games implement larger, more complex dialogue systems to increase immersion and interactivity.

Increasing the quantity and sound quality of audio files also impacts file size. To return to the example of Starfield: Bethesda released Skyrim in 2011 with approximately 60,000 lines of voiced dialogue. Fallout 4, in 2015, nearly doubled that with 110,000. Starfield surpasses both with over 293,000 voiced lines. All of that dialogue requires audio files, not to mention the systems to coordinate it all.

How to manage ever-larger game files

What about the players? The six aforementioned 2023 releases alone add up to over half a terabyte, far more than a base Xbox Series S can handle (system files take up about 30% of its 512 GB of internal storage). The PlayStation 5 would struggle to accommodate another triple-A game of this size on its internal storage, with only 667 GB of its 825 GB drive available for game files. Even PC gamers will struggle to hold games of this size. Players either get used to shuffling games on and off their machines every month, deleting old favorites to make room for new releases, or they invest in additional storage solutions.

Fortunately, there are plenty of storage options for gamers. Kingston’s FURY product line has been expressly designed to meet the technical demands of contemporary gamers. The Kingston FURY Renegade NVMe PCIe 4.0 M.2 SSD delivers speeds of up to 7,300 MB/s read and 7,000 MB/s write, for PS5 and PC gamers who want to run their games directly from additional storage. Its graphene-aluminum heat spreader lets it sustain high intensity for longer with superior heat management, making it well suited to extended sessions of heavy use such as gaming, and a version with a full heatsink keeps the drive cool when the game heats up. It comes in several capacities, from 500 GB to 4 TB, to suit each player’s budget and storage needs. Finally, this SSD has the right speed and form factor to serve as an internal expansion SSD for the PS5, meaning you can significantly expand your console’s storage without external peripherals.

For Xbox owners, it is not possible to run current-generation games from external drives. That doesn’t mean Xbox gamers can’t benefit from extra storage, however: external SSDs are great for playing games from previous Xbox generations, storing media files, or keeping current games that aren’t in regular rotation. The XS2000 external SSD uses USB 3.2 Gen 2x2 to deliver read/write speeds of up to 2,000 MB/s and ample storage in a compact, portable form factor, with capacities ranging from 500 GB to 4 TB. Note that Xbox Series S and Series X consoles do not have USB-C ports, so these gamers will need an appropriate cable or adapter.

Switch players also have decisions to make. The hybrid console has a single microSD card slot to expand its tiny built-in storage. The engineers behind the Switch clearly anticipated that storage would become a growing concern for owners: the console can interface with microSD cards of up to 2 TB, a capacity that only arrived on the market in 2023, six years after the console was released. The Kingston Canvas Go! Plus microSD card is a great additional storage choice for Switch users: with read speeds of 170 MB/s and write speeds of 90 MB/s on larger-capacity cards, and capacities ranging from 64 GB to 512 GB, it offers options for every type of player, from those who rarely pick up their Switch to those who use it as their primary or only gaming device.

As games continue to grow in size for all of these reasons, the industry will adapt to the hardware demands they place on the systems that run them. Whether it’s internal M.2 SSDs, external SSDs, or high-capacity microSD cards, Kingston offers solutions that take the hassle out of managing your game library, leaving you more time to enjoy your passion.