PMC-Sierra Introduces Industry’s First Multi-Core, Multi-Threaded RAID Platform for x86 Servers

PMC-Sierra has announced maxRAID™, beginning with the BR5225-80 RAID adapter, a product designed specifically to boost the performance of Solid State Disk (SSD) storage in x86 server environments. The maxRAID BR5225-80 is built on PMC-Sierra's multi-core SRC 8x6G RAID-on-Chip (RoC) controller, paired with a performance-optimized, multi-threaded RAID software stack.

The adapter connects a PCI Express® (PCIe®) 2.0 x8 host interface to eight 6 Gb/s SAS/SATA ports, and targets data- and transaction-intensive applications that demand high IOPS (Input/Output Operations per Second) or throughput. The maxRAID solution currently delivers 136,000 IOPS per RAID controller, and PMC-Sierra expects its 6 Gb/s SAS RAID solution to more than double that figure, reaching 300,000 IOPS. The platform also includes the maxRAID storage manager, a web-based graphical application with an intuitive user interface that lets users manage their server storage efficiently from any standard browser within the company.
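
To put those IOPS figures in rough perspective, the short sketch below converts them into approximate throughput. The 4 KiB transfer size is our assumption, not a number from the announcement; real throughput depends on block size, queue depth, and RAID level.

```python
# Rough, illustrative conversion of an IOPS rating into throughput.
# The 4 KiB transfer size is an assumed workload parameter.
def iops_to_mib_s(iops: int, block_size_kib: int = 4) -> float:
    """Approximate throughput in MiB/s implied by an IOPS figure."""
    return iops * block_size_kib / 1024

for iops in (136_000, 300_000):
    print(f"{iops:>8,} IOPS x 4 KiB ≈ {iops_to_mib_s(iops):,.0f} MiB/s")
```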

Under a joint development agreement with IBM, PMC-Sierra is integrating its storage protocol controllers with IBM's RAID software to produce breakthrough RAID solutions. The results are already visible: maxRAID makes PMC-Sierra one of the first in the industry to deliver a multi-core, multi-threaded RAID solution that optimizes SSD storage performance, and the jointly developed product is aimed squarely at x86 servers.

Most RAID solutions on the market today were designed for Hard Disk Drives (HDDs), which offer only a fraction of the performance of SSDs. To compensate, HDDs are often short-stroked, meaning only a small portion of each drive's total capacity is used in order to improve seek performance. The result is low disk utilization and a higher cost for every increment of performance. PMC-Sierra's maxRAID BR5225-80 instead combines its multi-core, multi-threaded design with IBM's RAID stack so the controller can keep pace with SSDs. This saves cost and lowers data center power requirements, while ensuring the best possible utilization of storage capacity.
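
As a quick illustration of why short-stroking is expensive, here is a back-of-the-envelope sketch. Every price and performance figure in it is an assumed round number of ours, not data from the article; only the general shape of the trade-off is the point.

```python
# Illustrative only: assumed figures for a 7,200 RPM HDD and an enterprise SSD.
HDD = {"capacity_tb": 1.0, "iops": 200, "price_usd": 100}
SSD = {"capacity_tb": 0.2, "iops": 30_000, "price_usd": 400}

SHORT_STROKE_FRACTION = 0.25  # use only the fast outer 25% of each HDD platter

def cost_per_kiops(drive: dict) -> float:
    """Dollars spent per 1,000 IOPS of random I/O performance."""
    return drive["price_usd"] / (drive["iops"] / 1_000)

hdd_usable_tb = HDD["capacity_tb"] * SHORT_STROKE_FRACTION
print(f"Short-stroked HDD usable capacity: {hdd_usable_tb:.2f} TB of {HDD['capacity_tb']:.2f} TB")
print(f"HDD cost per 1,000 IOPS: ${cost_per_kiops(HDD):,.0f}")
print(f"SSD cost per 1,000 IOPS: ${cost_per_kiops(SSD):,.0f}")
```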

Key features of PMC-Sierra's maxRAID BR5225-80 adapter (x8 PCIe 2.0 to eight 6 Gb/s SAS ports) include:

- Three multi-threaded cores on the RAID-on-Chip controller, which services all attached devices simultaneously to increase I/O
- Support for 1.5 and 3 Gb/s SATA and 3 and 6 Gb/s SAS devices
- Connectivity for up to eight SSD or HDD devices per adapter
- Boot support for both BIOS and uEFI systems
- The maxRAID suite of management utilities, including the maxRAID storage manager with its web-based graphical user interface

PMC-Sierra is a widely known provider of internet infrastructure semiconductor solutions. Although the announcement has created a buzz in the market, its real-world impact remains to be seen.

Web Servers Hijacked to Launch DDoS Attacks

Imperva, an online security firm, has revealed that web servers are being breached in a new way and used to launch high-bandwidth DDoS attacks, apparently driven by an experimental botnet.

The attack works by installing a 40-line PHP script on the servers. More than 300 servers have been infected so far, all of them with vulnerabilities or loopholes in their security applications, and the compromised machines are then used to launch high-volume DDoS attacks. Imperva's researchers were the first to notice this new hijacking technique.

Imperva's Chief Technology Officer, Amichai Shulman, said that web servers are the most attractive targets for hijackers: they typically run no antivirus software, and they offer ten to fifty times the upload bandwidth of a typical home PC. At a rough estimate, at least hundreds of servers have already been infected and repurposed to launch DDoS attacks.

Imperva has warned that these DDoS attacks are extremely dangerous for web hosts and the companies that manage them, and could become a worldwide threat if not curtailed quickly. Shulman added that companies should constantly monitor their presence on Google to find out whether they have been compromised or attacked in any way.

DDoS attacks launched from web servers are also hard to detect, because they do not take the conventional form defenders are used to, which catches many of them off guard.

Tracing the attacks back is of little help either: it rarely leads to the real culprit, and usually ends at some remote server kept by a small hosting provider.
Addressing DDoS is therefore imperative if websites are to keep functioning effectively and efficiently, with enough security in place that they cannot be easily breached or compromised.

This is a serious matter and deserves due attention, so that the operation of the website is not jeopardized.

Gears of War 3 might have dedicated servers

Epic Games is reportedly moving to dedicated servers for Gears of War 3. The talk centers on trials meant to get to the root of the problems that have dogged Gears of War 2's multiplayer.

Dedicated servers have been one of the industry's hot topics: they were at the center of last year's controversies around Modern Warfare 2 and Rage, and gaming legend John Carmack is notably not among their supporters. Gears of War 3, one of the most important console deathmatch titles around, nonetheless looks set to add them.

There is some irony in Carmack distancing himself from dedicated servers while Gears of War 3 prepares to embrace them. Gamers were not expecting this and are waiting to see how it unfolds. The move would solve several problems players currently face, and dedicated servers could well become the mark of top-quality console titles, though gamers excited about best-in-class quality are also worried about the price.

CVG claimed that Gears of War developer Epic Games is likely to make the move to dedicated servers with the next installment in the series. The UK gaming site observed that Epic needs to do this to get its online problems under control: sluggish matchmaking and in-game lag have become major concerns, and that is not the experience customers want.

For now it is hard to say whether this is more than a rumor, and it will take confirmation from the company to know for sure. If it does turn out to be true, it will be cause for celebration in the gaming community, and the gaming experience could be transformed.

It would be a striking reversal: PC FPS titles drifting toward matchmaking while a major console shooter switches to dedicated servers, quite unlike what we are used to today. Trials have already been run with the intent of getting to the root of the problem, so gamers will simply have to wait and see what the company finally decides rather than getting carried away by the rumor.

Dell & IBM tap ARM processors for low-power servers

Dell and IBM have agreed to evaluate low-power servers built on multi-core ARM technology from Marvell Technology Group, with the aim of reducing power consumption.

Dell Inc. and IBM are set to test multi-core ARM processors from Marvell Technology Group for possible use in low-power servers for large data centers. The open-source ecosystem is said to have been the main driving force behind the two companies pioneering the idea, which they have been toying with for some time and are now close to putting to the test.

Both companies have ordered a few thousand processors from Marvell, which plans to deliver 40nm ARM-based server chips this year as an alternative to low-power x86 processors. The resulting servers would be Dell's first computers based on non-x86 microprocessors. IBM also believes ARM may succeed in servers and welcomes the use of Linux operating systems in the server space.

According to a Dell executive, the company has a strong preference for adopting the technology. He said that as the industry moves from a decade of general-purpose processors toward application-specific processors, faster and more energy-efficient servers are needed, and that calls for low-power processors; customers evaluating these machines will find them more appliance-oriented.

Dell's Chief Technology Officer for Enterprise Products, Paul Prince, said the company has examined the processor closely. He noted that Dell's low-power notebooks were a hit thanks to a shifting market and strong consumer demand. The new servers, he added, would handle applications such as file serving within large data centers, and would compete against low-power microprocessors from both Advanced Micro Devices and Intel Corp., which are working on low-power x86 parts for embedded designs that boast x86 performance, reliability, and 64-bit capability. Such processors are already used daily by firms like Broadcom and many others.

According to both companies, the processor should make the servers a hit for web hosting, web farms, and light-load infrastructure. These servers can be described as high-density, low-power, 'ultra-light' products.

James Dickerson, Director of ASIC Development at IBM Microelectronics, said the processors will drive a new push for energy efficiency, and that IBM will supply whatever solution customers want. He noted that IBM and ARM first began collaborating in March 1998; IBM initially plans to offer the RISC functions in a 0.18-micron process, and later to incorporate fully synthesizable cores at 0.13-micron and smaller feature sizes.

ARM is engaged in a fierce battle with Intel, the world's largest processor maker. ARM-based chips are now moving to 40nm, and the processors are expected to help power supercomputers and address many energy-related issues.

Shared computing: facts and effectiveness

The concept of shared computing has gained acceptance in today's highly complicated, multi-tasking computing environment. It delivers high performance by organizing a network of computers to work toward a particular task: the computers in the network contribute their processing power, along with other essential resources, to meet the workload's requirements. Shared computing can be so effective that its aggregate processing power rivals or exceeds that of a supercomputer. Since it is too much to ask a single computer to do all the work efficiently, shared computing is finding wide acceptance for its usability and effectiveness.

Generating the benefits of unused resources

Computers have substantial computing resources at their disposal, but those resources are rarely used to the fullest; a machine often stays switched on while sitting idle. A shared computing system puts exactly these unused resources to work.

How it works

Conventional computing systems typically use identical computer models, all running the same operating system. In most cases each application running on the system has a dedicated server of its own, and the network is often hardwired, with all of the components connected to one another through hubs. Such a traditional system is well designed and performs well.

A shared computing system can be just as useful, even if it does not look as tidy. It relies on software to establish connections between the participating computers, and unlike the traditional model, there is no requirement that every machine run the same operating system: shared computing can connect computers running different operating systems, over local area networks, wireless networks, hardwired networks, or the Internet. It also makes it easier to add extra resources than traditional systems do.

The role of the system's software

The software that runs the shared computing system is what lets it tap the unused processing power of every computer. Any computer that joins the system must have this software installed. The software has to contact the system's administrative server to fetch data, watch the host computer's usage so that spare processing power can be used whenever it becomes available, and then send the analyzed results back to the server and collect new data.
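
As a rough illustration, such client software might look something like the sketch below. The server URL, the JSON work-unit format, and the idleness check are all assumptions of ours, not details from the text; real systems of this kind (BOINC, for example) are far more elaborate.

```python
# Minimal sketch of a shared-computing client, under assumed details:
# a hypothetical administrative server URL and a simplistic idleness test.
import time

import psutil     # third-party: pip install psutil
import requests   # third-party: pip install requests

SERVER = "http://example.com/shared-computing"   # hypothetical administrative server
IDLE_CPU_THRESHOLD = 20.0                        # percent; assumed cutoff for "idle"

def host_is_idle() -> bool:
    """Treat the host as idle when CPU usage over one second is low."""
    return psutil.cpu_percent(interval=1) < IDLE_CPU_THRESHOLD

def process(work_unit: dict) -> dict:
    """Placeholder for the real computation; here it just sums the numbers sent."""
    return {"id": work_unit["id"], "result": sum(work_unit["numbers"])}

def main() -> None:
    while True:
        if not host_is_idle():
            time.sleep(30)  # back off while the owner is using the machine
            continue
        work = requests.get(f"{SERVER}/work", timeout=10).json()    # fetch new data
        answer = process(work)                                      # use spare CPU
        requests.post(f"{SERVER}/result", json=answer, timeout=10)  # send results back

if __name__ == "__main__":
    main()
```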

Shared computing needs standardization

Although shared computing is excellent at tackling several difficult problems, it is not yet a standardized solution. Many people find it hard to design and manage, and therefore not particularly effective or useful, and many deployments depend on unique architectures, software, or hardware. Experts are still working to improve it.

Will Cloud Computing Be A Feasible Option For Midsize Enterprises?

Cloud computing has been the buzzword of recent times thanks to its promising capabilities and enormous potential. However, server costs and the cost of delivering power to those servers are two major concerns for IT operations, especially at midsize enterprises. So the million-dollar question is: will cloud computing be a feasible option for midsize enterprises, and what are the main economic considerations? Amazon vice president James Hamilton had something interesting to share on the subject.

In an extremely informative presentation on data center economics, Hamilton compared the economics of a service provider with those of an enterprise, with the focus falling on the advantages public cloud providers (no points for guessing Amazon) hold over enterprise data centers. Let's look at the key points from Hamilton's talk that apply to everyone.

Large Computing Providers Have An Edge Over Others

Because large computing providers buy in large volumes, they can negotiate better pricing. Server vendors are also more inclined to work with the big players on custom designs, so those providers get a combination of customization and price advantage that off-the-shelf components cannot match.

The average enterprise therefore bears a higher cost: Hamilton estimated that server, administration, and networking costs are roughly five times higher for an average enterprise than for a large provider.

Server Costs & the Cost of Delivering Power to Servers

More often than not, people assume the cost of delivering power to the servers is simply part of the server costs. However, according to the figures Hamilton quoted, over the 10-year lifetime of a data center the servers themselves account for more than half of the overall cost, while the power they consume is barely ten percent. The surprising part is that once the cost of delivering that power and cooling the servers is added in, power-related spending approaches one-third of total costs.

This clearly indicates that minimizing the cost of cooling equipment and data center power distribution can cut operating costs significantly.
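
To make those proportions concrete, here is a quick back-of-the-envelope sketch. The total dollar figure and the residual "other" share are assumptions chosen for illustration; only the rough percentages come from the figures quoted above.

```python
# Back-of-the-envelope breakdown over an assumed 10-year data center lifetime.
TOTAL_10_YEAR_COST = 100_000_000  # assumed total spend, USD

shares = {
    "servers": 0.55,                     # "more than half"
    "server power (electricity)": 0.10,  # "barely ten percent"
    "power delivery + cooling": 0.23,    # brings power-related spend to ~1/3
    "other (network, staff, ...)": 0.12, # assumed remainder
}

for item, share in shares.items():
    print(f"{item:<30} ${share * TOTAL_10_YEAR_COST:>12,.0f} ({share:.0%})")

power_related = shares["server power (electricity)"] + shares["power delivery + cooling"]
print(f"\nPower-related share of total: {power_related:.0%} (~one-third)")
```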

Powering Off a Server Isn't the Way to Go!

If you think you can power off a server whenever it is not in use and save a great deal of money, think again. Because the capital cost of the server dwarfs the cost of the electricity it draws, keeping servers running and finding paying work for them turns out to be economically more efficient than switching them off when they seem idle. That, of course, assumes a large enough base of customers willing to pay for the capacity.
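
A small worked example may help here. All the dollar figures below are assumptions of ours; the point is simply that the amortized purchase cost accrues whether the server is on or off, so powering it off saves only the much smaller energy cost.

```python
# Illustrative arithmetic only; all figures are assumed.
SERVER_PRICE = 3_000            # assumed purchase price, USD
LIFETIME_YEARS = 3              # assumed amortization period
POWER_DRAW_KW = 0.25            # assumed average draw
ELECTRICITY_USD_PER_KWH = 0.10  # assumed energy price

HOURS_PER_YEAR = 24 * 365
capital_per_hour = SERVER_PRICE / (LIFETIME_YEARS * HOURS_PER_YEAR)
energy_per_hour = POWER_DRAW_KW * ELECTRICITY_USD_PER_KWH

print(f"Capital cost per hour (paid whether on or off): ${capital_per_hour:.3f}")
print(f"Energy cost per hour (saved if powered off):    ${energy_per_hour:.3f}")
print(f"Capital is ~{capital_per_hour / energy_per_hour:.0f}x the energy saving")
```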

Clearly, it makes sense for IT vendors and customers alike to pay attention to the aspects above and factor them into their planning.
