
[Pulp-list] Re: [et-mgmt-tools] Software Content Management (Introducing Pulp)



Hi there,

Answers inline.

Máirín Duffy wrote:
DISCUSSION

Many of the folks subscribed to these lists are seasoned Linux system engineers, system administrators, and/or release engineers for software content, so we would love to hear some of your thoughts on what problem areas you'd like to see addressed by free and open source management tools like Pulp. If you have any thoughts on the following topics, or others that are related but maybe not mentioned here, please discuss them here, and let's see if we can figure out the best way to make Pulp useful for you:

- Do you host internal mirrors of external content? What kind of content? How many mirrors? Do you have mirrors available for multiple geographic locations within your organization?

Yes. I host mirrors of external yum repos: one mirror per external repo, plus a testing repo and an internal repo. We use OpenAFS to host mirrors for different buildings.
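The post doesn't say how the mirrors are built, but a minimal sketch of a per-repo sync using reposync and createrepo (from yum-utils) might look like this. The repo IDs and the OpenAFS path are assumptions for illustration; the script only prints the commands it would run, so it can be inspected or piped to sh.

```shell
#!/bin/sh
# Hypothetical per-repo mirror sync sketch. The repo IDs and the /afs
# path are made up -- the post doesn't describe the actual mechanism.
MIRROR_ROOT=${MIRROR_ROOT:-/afs/example.com/mirror}
REPOS="rhel5-base atrpms dag internal testing"

# Emit the commands to sync one repo; echoed rather than executed so
# the sketch stays inspectable (pipe the output to sh to actually run).
sync_repo_cmds() {
    repo=$1
    dest="$MIRROR_ROOT/$repo"
    echo "mkdir -p $dest"
    echo "reposync --repoid=$repo --download_path=$dest"
    echo "createrepo $dest"
}

for repo in $REPOS; do
    sync_repo_cmds "$repo"
done
```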
- How many different 'upstream' sources of content need to be made available for systems at your organization? Hardware drivers from hardware vendors? Operating systems from OS vendors or from FOSS repos? Non-FOSS proprietary applications from application vendors? In-house application/software development teams?

Channels: all RHEL5 channels, ATrpms, DAG, a misc/internal repo, and a testing repo.
- How often do you pull down content ('sync' maybe could be a term) from these different upstream content sources?

When there are major updates, like RHEL5U1, or when there are critical security releases or fixes for a bug that drastically impacts production; otherwise, every 6-9 months for non-critical updates.

- How do you organize all of the software content that is delivered to your systems right now? What are the strengths you've found to your approach today? What are the weaknesses you'd like to address?

All RPMs are in locally hosted yum repos. All machines can see all repos except for testing. A set of text files and cfengine control which RPMs are installed. Strengths: yum handles the dependencies. Weakness: no clean way to keep different versions of packages in production or do an automated staged rollout of updates.

- How much customization/general 'mucking' do you do with the content you pull down from various sources? Are you more interested in simply making all the content available or do you have requirements for modifying/customizing it as well?

I usually leave them as-is. The major exception is recompiling the Firefox 2 RPM from Fedora for RHEL5.

- If you do customize the content, to what extent do you need to do this? Branding? Localization? Etc.?

Upgrading from Firefox 1.5 to Firefox 2.0.
- How strict are your policies for which systems have access to which kind of content? Is access completely open, is access constrained by which system owners have purchased licenses/entitlements to which content? Is access constrained by security concerns? Is access constrained by stability concerns (e.g., production systems must never be able to have development level content deployed to them?)

All systems can access all repos except for testing. Testing is available, but disabled on all machines by default. This helps with staged rollouts.
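The per-machine toggle described above could be done by flipping the `enabled=` flag in the repo's .repo file; the file path and repo ID below are assumptions, since the post doesn't show the actual configuration.

```shell
#!/bin/sh
# Hypothetical helper for the per-machine testing-repo toggle described
# above; the file path and repo id are assumptions.
REPO_FILE=${REPO_FILE:-/etc/yum.repos.d/testing.repo}

# Flip the enabled= flag in the .repo file (0 = disabled, 1 = enabled).
set_testing_repo() {
    state=$1   # 0 or 1
    sed -i "s/^enabled=.*/enabled=$state/" "$REPO_FILE"
}
```

For a one-off on a single host, yum can also enable the repo just for one transaction without editing any files: `yum --enablerepo=testing update somepackage`.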
- What kind of requirements do you have for producing data about which systems had which content installed when, if any?

None.
- How many different environments do you manage content for? Do you manage content for development / qa / production environments?

Two profiles of machines, with one testing repo that can be enabled or disabled per machine.
- How do you prefer to deploy content to systems? Do you prefer to have a software management tool to do that or do you prefer to tie this into a configuration management tool?

I use cfengine to copy a set of files containing RPM names, plus a scheduled command that runs "yum install `cat /var/lib/rpmlist/*`".
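The rpmlist mechanism above can be sketched as a small script: cfengine drops files of package names into /var/lib/rpmlist/ and a scheduled job installs the union. The directory name comes from the post; the file format (one package per line, '#' comments) is an assumption.

```shell
#!/bin/sh
# Sketch of the rpmlist mechanism: cfengine drops files of package names
# into /var/lib/rpmlist/ and a cron job installs the combined list.
# The directory name is from the post; everything else is assumed.
RPMLIST_DIR=${RPMLIST_DIR:-/var/lib/rpmlist}

# Collect every package name from every list file, one per line,
# skipping blanks and '#' comments, de-duplicated.
build_pkg_list() {
    cat "$RPMLIST_DIR"/* 2>/dev/null \
        | grep -v '^#' | grep -v '^[[:space:]]*$' \
        | sort -u
}

# The scheduled job would then run something like:
#   yum -y install $(build_pkg_list)
```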

- At what level of granularity do you perform software-management related tasks on your systems? For example, do you find yourself most often:
  - automatically selecting and deploying content to many systems at once in a uniform fashion
  - automatically selecting and deploying content to smaller groupings of systems with carefully defined templates
  - manually selecting and deploying content to many systems at once
  - manually selecting and deploying content to individual systems one-by-one
  What level of importance does each of these abilities have to you?

Yes -- mostly the first: automatically deploying content to many systems at once in a uniform fashion.

I need to be able to have a main profile and tweak an individual machine as needed without being clobbered by the main profile. Currently, I do this by having a list of RPMs that must be installed and adding additional RPMs in host-specific lists.
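The "main profile plus per-host tweaks" scheme above could be sketched like this: a common list applies everywhere, and an optional host-specific list only ever adds packages, so it can never clobber the main profile. The file names are assumptions; the post only describes the idea.

```shell
#!/bin/sh
# Sketch of the main-profile-plus-host-overrides scheme described above.
# File names ("common", one file per hostname) are assumptions.
RPMLIST_DIR=${RPMLIST_DIR:-/var/lib/rpmlist}

# Print the package list for one host: the main profile, always applied,
# plus host-specific additions, if any (additive only, never clobbering).
pkgs_for_host() {
    host=$1
    cat "$RPMLIST_DIR/common"
    [ -f "$RPMLIST_DIR/$host" ] && cat "$RPMLIST_DIR/$host"
    return 0
}
```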


Sincerely,
Jason

