The importance of data is clear to everyone. Data is the raw material of business – an economic input almost on a par with capital and labor for the "information age." Organizations are no longer suffering from a lack of data; they're suffering from a lack of the right data. In today's data-driven world, it is not only analytical applications that need to access data from diverse sources, but operational and transactional applications and processes as well. Business leaders need the right data in order to effectively define the strategic direction of the enterprise.
The reality is that data in most organizations is distributed across multiple operational and analytical systems, including Apache Hadoop, relational databases, and NoSQL stores such as MongoDB. With social media, cloud applications and syndicated data services leading to expanding volume, variety and velocity of data, many organizations are realizing that physical consolidation or replication of data is not practical for all data integration and business agility needs.
It’s time to start treating your data less as a warehouse and more as a supply chain. Having identified your sources of data, you must corral it for analysis, in the same way that the various components come together on an assembly line. Recognize that the data won’t be static—it will be manipulated as it goes through the supply chain, added to other pieces of data, updated as more recent data comes along, and transformed into new forms as you look at different pieces of data in aggregate.
To build an effective data supply chain, a better integration solution is required that will deliver any combination of data, to any application, at any time, in any form needed. Red Hat JBoss Data Virtualization enables organizations to deliver timely, actionable, and unified information through lean integration of data spread over multiple applications and technology silos, making all the data easily consumable by people who need it the most to advance the business.
Red Hat JBoss Data Virtualization enables agile data utilization in 3 easy steps:
- Connect: Simultaneously access data from multiple, heterogeneous data sources, such as Apache Hadoop and MongoDB;
- Compose: Easily combine and transform data into reusable, business-friendly virtual data models and views;
- Consume: Make unified data easily consumable through open standards interfaces.
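As an illustration of the Compose step, a virtual view in JBoss Data Virtualization can be defined in SQL-like DDL. The sketch below is hypothetical: the source model names (`rdbms`, `mongo`) and all table and column names are invented for illustration.

```sql
-- Hypothetical sketch of a reusable virtual view federating a relational
-- source and a MongoDB source; model, table, and column names are
-- illustrative assumptions, not a real deployment.
CREATE VIEW Customer360 (
    customer_id integer,
    name string,
    total_orders integer
) AS
SELECT c.id, c.name, COUNT(o.order_id)
FROM rdbms.customers AS c
LEFT OUTER JOIN mongo.orders AS o ON o.customer_id = c.id
GROUP BY c.id, c.name;
```

Once deployed, clients would consume a view like `Customer360` through standard interfaces such as JDBC, ODBC, or OData, without needing to know which physical sources back it.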
The simplicity offered by data virtualization software enables organizations to acquire actionable, unified information when they want and how they want—that is to say, at the speed of business—for enlightened business execution and to adapt to changing business demands. Combined with ease of development, data virtualization software supports a range of IT projects and initiatives, including:
Self-Service Business Intelligence: The virtual, reusable data model provides a business-friendly representation of data, allowing users to interact with their data without having to know the complexities of the underlying databases or where the data is stored, and allowing multiple BI tools to acquire data from a centralized data layer.
Unified 360° View: Deliver a complete view of master and transactional data in real time. The virtual data layer gives a unified, enterprise-wide view of business information that improves users' ability to understand and leverage enterprise data.
Agile SOA Data Services: A data virtualization layer provides the data services layer for SOA applications. Data virtualization shortens the time needed to create data services that encapsulate data access logic, lets multiple business services acquire data from a centralized data layer, and provides loose coupling between business services and physical data sources.
Better Compliance Control: The data virtualization layer delivers data firewall functionality. Data virtualization reduces risk through centralized access control, a robust security infrastructure, and fewer physical copies of data. Furthermore, the metadata repository catalogs enterprise data locations and the relationships between the data in various data stores, enabling transparency and visibility.
Data-driven enterprises can maximize the business value of their data by establishing the organization, processes, and infrastructure necessary to manage their data as a dynamic supply chain. This allows relevant, trusted data to be delivered quickly when, where, and how it is needed, supporting the changing needs of the business and allowing it to thrive in the information-driven economy.