The last decade of technological advances has seen a race to reduce costs. Migration to virtualized systems quickly eclipsed traditional bare-metal deployments. At some point, virtualization will be outpaced by containerization. While the physical footprint of an organization’s compute resources may have been reduced, the complexity of managing those environments certainly has not.
Back in the Stone Age of IT operations and information security, everyone’s attention was focused on the corporate datacenter and the physical machines that lived there. It was simpler to understand where security controls needed to be applied. You had one giant cable coming into the building from "the internet," so you’d throw firewalls, intrusion detection/prevention systems (IDS/IPS), proxies, load balancers, and other tools in-line before that channel was split to the larger corporate network. This Castle-and-Moat model of protection worked fairly well (ignoring the insider threat) for decades.
Around the dawn of the new millennium, to reduce costs and gain efficiency, enterprises rapidly converted to virtualization. When that first happened, many infosec pros were initially confused ("What do you mean there is more than one server running on this computer?"), but gradually we all got with the program. As virtual machines became one of many places valuable data could be stored, we adapted (Deploy a virtual server? Well, change the virtual firewall’s rules to accommodate that. Wash, rinse, and repeat). Virtualization and eventually containerization quickly moved into the mainstream, and you rapidly had hundreds of virtual servers or thousands of containers running on a single system (don’t even get me started on clustering, high availability, and load balancing….).
Do clouds need moats?
Virtualization evolved into "the cloud". TL;DR for everyone out there: the cloud is just someone else’s computer. You used to run it on your server in your datacenter. Move it "to the cloud" and it now runs on Frank’s Discount Cloud and actually sits in his basement in Peoria, Illinois. The cloud gave individuals and businesses a low-cost means to quickly deploy systems and applications. It offered benefits around high availability and other features you’d typically see deployed in Enterprise-class organizations. Instead of ordering physical boxes from your favorite retailer or OEM and having that take weeks to be delivered and weeks more to be configured and deployed, now you call up Frank (say "Hi!" to his mom while she’s down in the server room doing Frank’s laundry) and Frank can have you up and running with computing and storage resources in minutes. Cloud lets you "outsource" a lot of technology and skills you might not have in-house (or have any interest in managing yourself).
In your traditional datacenter, it’s very reasonable to state that most, if not all, workloads running on a given system are known (especially in production). Your sysadmins and security teams can make choices about what runs where and what security controls need to be in place for systems that are more sensitive than others. Can you say the same, or even know the same information, about something running in the cloud?
In cloud deployments, you need to replicate your castles, moats, and other desired security controls. Some hosting providers offer a great number of controls to choose from and subscribe to; others may not. Not all cloud providers have the same capabilities and control offerings. As you’re moving your workloads and your data to the cloud you REALLY need to understand what you’re deploying to.
For example, did Frank give you a dedicated physical server of your own right next to his toaster oven, or are you renting a virtual slice of CPU and storage that’s shared with Frank’s other customers? That might matter to you and your data.
The "old" IT security questions still need to be asked
If you’re just posting the cafeteria menu for your company out to the cloud, you probably don’t care if your Salisbury steak recipe can be seen by outsiders (or Frank’s mom). But what if the cafeteria application you’re hosting in Frank’s Spiffy Cloud shares CPU, memory, and storage with an unknown number of random other customers? What if it also takes credit cards so your patrons can easily pay for the BOGO (Buy One, Get One) tater tots that are served with that yummy Salisbury steak?
Does that make your Spidey-sense tingle? Does your right eye twitch just a little bit thinking about it? If so, you know that things like the PCI-DSS apply, that you’ve got to invest more into the security and isolation of that system, and that random hangers-on are NOT welcome.
Moving to the cloud means you need to refine your risk assessment and management practices. Things that may have been OK on-site are really, really BAD ideas out in a multi-tenant environment where you have little (or no) visibility and control. Frank’s a great guy and all, but:
- How thoroughly is he vetting his customers and lumping them together onto systems in his basement (or did he expand out into the garage now? The cloud is booming, you know!)?
- How technically adept is Frank? His price is right, but larger, more established cloud providers have deeper benches of people trained in technology and security practices.
- How quickly can Frank adapt to emerging problems like Spectre, Meltdown, or MDS?
- Do you really feel confident he’s able to isolate your data enough that a sophisticated attacker couldn’t clandestinely view your data?
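Questions about a provider’s responsiveness to flaws like Spectre, Meltdown, or MDS apply to your own Linux hosts, too. A minimal sketch (assuming a kernel new enough to expose `/sys/devices/system/cpu/vulnerabilities`, roughly 4.15 and later) that prints each CPU flaw the kernel knows about and its mitigation status; the directory parameter on the helper is purely for illustration and testing:

```shell
# List known CPU side-channel vulnerabilities and their mitigation status.
list_vulns() {
  # The real path on Linux is /sys/devices/system/cpu/vulnerabilities;
  # the parameter exists only so the helper is easy to test elsewhere.
  dir="${1:-/sys/devices/system/cpu/vulnerabilities}"
  for f in "$dir"/*; do
    [ -f "$f" ] || continue
    # ${f##*/} strips the directory, leaving the flaw name (e.g. "l1tf")
    printf '%s: %s\n' "${f##*/}" "$(cat "$f")"
  done
}

list_vulns
```

On an unpatched or partially mitigated host you’ll see lines such as `mds: Vulnerable` rather than `Mitigation: ...`, which tells you exactly which emerging problems still need attention.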
Extend and evolve but also understand the new world of cloud-y IT security
Now, this isn’t intended to be FUD (Fear, Uncertainty, and Doubt) around "the big bad cloud", but it IS a call to action to prepare yourself and understand WHAT you need to do to protect your data no matter WHERE it is.
Take the time to understand where your important systems and data are, what controls you need to have around them and where you might not want to share resources that could covertly leak away that data.
You might need to make changes, like turning off Hyper-Threading for multi-tenant servers where you cannot guarantee that trusted and untrusted workloads are isolated from each other.
Understand what capacity you have in your physical systems or in your cloud farm so that IF you take the most secure path and shut off Hyper-Threading, you still have enough resources to handle your current and expected workloads.
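Before making that call, it helps to script a check of the current SMT (Hyper-Threading) state. A minimal sketch, assuming a Linux kernel that exposes the SMT control file (added around 4.19); the file-path parameter on the helper is purely for illustration:

```shell
# Report the current SMT state. On real systems the control file is
# /sys/devices/system/cpu/smt/control and holds one of:
# on, off, forceoff, notsupported.
smt_state() {
  file="${1:-/sys/devices/system/cpu/smt/control}"
  if [ -r "$file" ]; then
    cat "$file"
  else
    # Older kernels (or non-Linux hosts) simply lack the file.
    echo "notsupported"
  fi
}

smt_state

# To disable SMT until the next reboot (root required):
#   echo off > /sys/devices/system/cpu/smt/control
# To disable it persistently, add "nosmt" to the kernel command line.
```

Remember that turning SMT off roughly halves the number of logical CPUs the scheduler sees, which is exactly why the capacity check above matters.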
Think about how you need to manage your risks and make decisions that protect you, your company, and your customers.
Red Hat Product Security helps provide you with clear, accurate descriptions of vulnerabilities so you can decide what actions you need to take and how quickly you need to react to media-grabbing events. Understanding a problem makes you better able to deal with it within the constraints of your unique environment, be it traditional, the leading edge in Frank’s Spiffy Cloud, or wherever you choose to host your work!
Chris Robinson is manager, Product Security Assurance at Red Hat.