Re: [Pulp-list] Client Refactoring
- From: Jay Dobies <jason dobies redhat com>
- To: pulp-list redhat com
- Subject: Re: [Pulp-list] Client Refactoring
- Date: Mon, 25 Jul 2011 10:10:57 -0400
On 07/20/2011 04:59 PM, James Slagle wrote:
On Wed, Jul 20, 2011 at 02:01:16PM -0600, Jason L Connor wrote:
I take the "4 main goals" are the projected finished product of this
effort. I noticed the server proxy isn't mentioned as a separate
component. Do you see this as part of the API Library?
By server proxy do you mean what's in pulp.client.server? If so, then yes, I
had envisioned that as part of the API library.
Under the API Library, what do you mean by "interactive mode" and what
will it buy us over using the Python interpreter directly to explore the API?
That's basically what I meant; I'll make that clearer on the wiki page.
Basically, make what happens in pulp.client.cli.admin|consumer happen as part
of a __main__, so that it's easier to get set up interactively. That way, you
wouldn't have to write a small script to set things like the config and server
up for you in order to explore the API.
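The "small script" being avoided might look something like the sketch below. The function and module names here are illustrative assumptions based on the discussion, not the actual Pulp client code:

```python
# Hypothetical sketch of an interactive-mode entry point. The config
# layout and the idea of a stand-in "server" object are assumptions;
# the real client wires up pulp.client.server and its config instead.

def interactive_setup(hostname="localhost.localdomain"):
    """Build the config and server objects a user would otherwise
    have to wire up by hand before exploring the API."""
    config = {"server": {"host": hostname, "port": 443, "scheme": "https"}}
    # Stand-in for a real server proxy bound to that config.
    server = {"config": config, "connected": True}
    return config, server

if __name__ == "__main__":
    # Running the module directly drops the user into a ready state:
    # config and server already exist, so `python -i` gives a usable shell.
    config, server = interactive_setup()
    print("server host:", config["server"]["host"])
```

With a __main__ like this, `python -i` on the module would leave `config` and `server` bound and ready for exploration.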
I really like the model classes. They make a lot more sense than the
argument-heavy API we have today. However, how do you plan to map the
model classes to the appropriate REST call? Will the models themselves
know about the URL paths?
There are two ways I see this could go.
First, you instantiate the model class, such as Consumer(), and set the
appropriate fields. We could then call a create method on that instance, such
as consumer.create(). That method would have to make use of the API, know
which URL to use, use the active server, etc.
Second, the instantiated model could be passed to the create method of an
instance of ConsumerAPI(). This is similar to what happens today,
although instead of passing in a large number of arguments, you pass in a
model instance.
Actually, a third way would be a combination of the two. The consumer
instance's create method acts more like a convenience method and uses
ConsumerAPI().create, passing in itself as the model to create.
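A minimal sketch of that third ("combination") approach might look like this. ConsumerAPI, Consumer, and the URL layout are illustrative assumptions, not the actual Pulp client code:

```python
# Sketch of the combination approach: the model's create() is a
# convenience method that delegates to the API class, passing itself
# as the model to create.

class ConsumerAPI:
    base_url = "/consumers/"

    def create(self, model):
        # In the real client this would POST the model's fields to
        # base_url via the active server; here we just echo the request.
        return {"url": self.base_url, "body": vars(model)}

class Consumer:
    def __init__(self, consumer_id, description=""):
        self.id = consumer_id
        self.description = description

    def create(self):
        # Convenience method: delegate to the API class, passing
        # ourselves as the model to create.
        return ConsumerAPI().create(self)

result = Consumer("c1", "test box").create()
```

This keeps all the URL and server knowledge in the API class while still giving callers the one-liner `Consumer(...).create()` ergonomics.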
What about things like query calls? Are they then defined in the objects
themselves, with each object exposing a number of query_* methods that
indicate the possibilities?
I'm not a huge fan of this approach, but more generally I'm not a fan of
the pure REST approach we've been taking either. It sounds great for
simple use cases (a single resource you're trying to create or delete). But I
think it's ultimately going to be limiting in terms of developing an API
that's actually useful.
It's the things that cross "resource" boundaries that complicate it.
When adding a package to a repo, where does that fall? Is that .create()
on the package? Is that .add_package() on the repo?
We currently have a big issue in our API with how bad our status APIs are.
Everything is broken down into individual resources, which just doesn't
make sense from a usefulness perspective. I have to get the repo, then
get its sync list, then get the sync entry to get to actually useful
data. That's going to be somewhat annoying if we have to dig through an
object model to get at all that, as compared to providing an API call
that just says get_latest_sync(repo_id).
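The contrast being drawn can be sketched as below. The data shape and the get_latest_sync signature are assumptions for illustration, not Pulp's actual API:

```python
# Sketch contrasting the two styles: instead of repo -> sync list ->
# individual entry, one convenience call returns the useful data
# directly. The in-memory history stands in for server-side storage.

SYNC_HISTORY = {
    "repo1": [
        {"finished": "2011-07-01T10:00:00", "state": "success"},
        {"finished": "2011-07-20T10:00:00", "state": "success"},
    ],
}

def get_latest_sync(repo_id):
    """Return the most recent sync entry for a repo, or None if the
    repo has never synced."""
    syncs = SYNC_HISTORY.get(repo_id, [])
    # ISO-8601 timestamps sort lexicographically, so max() on the
    # string works here.
    return max(syncs, key=lambda s: s["finished"]) if syncs else None
```

The caller gets the useful datum in one call rather than three round trips through the resource hierarchy.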
I also think this is going to start to fall apart when we start trying
to optimize queries. We're talking about absurdly large sets of data, so
we're going to need to take performance into account early on. If we tie
the client lib too tightly to an object model, then I suspect we're
going to end up shoehorning in advanced queries (cross-object, derived
fields, etc.).
I think I would prefer the first way. It would behave a lot like many ORMs
do. The model would have to know its URL, but there could be a lot of code
reuse from a base class, such as the create() method, which would be basically
the same for almost every model (you just hit a different URL).
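The base-class reuse described here might be sketched as follows; the class names, the `path` attribute, and the returned request shape are all assumptions for illustration:

```python
# Sketch of the ORM-style reuse: each model knows only its own resource
# URL, while create() is written once in the base class.

class Model:
    path = None  # subclasses supply their resource URL

    def create(self):
        # In the real client this would serialize self and POST it to
        # self.path on the active server; here we echo the request.
        return {"method": "POST", "path": self.path, "body": vars(self)}

class Repo(Model):
    path = "/repositories/"

    def __init__(self, repo_id):
        self.id = repo_id

class Errata(Model):
    path = "/errata/"

    def __init__(self, errata_id):
        self.id = errata_id
```

Adding a new model then means declaring a class with a `path` and its fields; create() (and siblings like update() or delete()) come along for free.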
Under the admin client, what is the difference between a plugin
architecture and what we have today with the command and action classes?
Question also applies to the consumer framework.
Practically nothing. Moving to plugins would allow for custom commands to
support different types of content without having to update the admin/client
code itself. As part of plugin discovery/registration, the commands and
actions would be added similarly to how they are each listed individually now
in the setup methods. One benefit would be if the community wanted to develop
a plugin to support a particular type of content that we didn't want to merge
into pulp master, for whatever reason.
One difference would be that, as plugins, they may expose additional config
that would need to be read and merged into the main config.
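One possible merge policy is sketched below; the function name, the nested-dict config shape, and the rule that main-config values win over plugin defaults are all assumptions, not a decided design:

```python
# Sketch of folding a discovered plugin's config sections into the main
# config. Policy assumed here: plugin values fill in only keys the main
# config doesn't already set.

def merge_plugin_config(main_config, plugin_config):
    # Copy each section so the caller's main config is never mutated.
    merged = {section: dict(values) for section, values in main_config.items()}
    for section, values in plugin_config.items():
        merged.setdefault(section, {})
        for key, value in values.items():
            # setdefault: main config takes precedence over plugin defaults.
            merged[section].setdefault(key, value)
    return merged
```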
Under the consumer framework, do you envision the same plugins to be
used by both the command line tools and by the gofer daemon?
I'm not 100% on this yet, but it could be that there is one gofer plugin that
acts as glue code to map gofer messages to the discovered plugins that
have registered themselves in some way to handle certain message types.
Using the same plugins in both gofer and the CLI is a possibility, although
I'm not at all sure what that would entail.
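The glue-plugin idea could be sketched roughly like this. The registry, the decorator, and the message-type strings are hypothetical; gofer's real plugin API is not shown here:

```python
# Hypothetical dispatcher: one gofer-facing plugin routes incoming
# message types to whichever content plugin registered a handler.

HANDLERS = {}

def register(message_type):
    """Decorator a content plugin uses to claim a message type."""
    def wrap(func):
        HANDLERS[message_type] = func
        return func
    return wrap

def dispatch(message_type, payload):
    """Called by the single gofer glue plugin for each message."""
    handler = HANDLERS.get(message_type)
    if handler is None:
        raise KeyError("no plugin registered for %s" % message_type)
    return handler(payload)

# Example content plugin registering a handler at import time.
@register("package.install")
def install_package(payload):
    return "installing %s" % payload["name"]
```

The same registry could, in principle, back both the gofer daemon and the CLI, which is the reuse question raised above.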
-- James Slagle
Pulp-list mailing list
Pulp-list redhat com
Freenode: jdob @ #pulp
http://pulpproject.org | http://blog.pulpproject.org