Apigee uses specific terms to describe key capabilities and concepts.
There is also structure at
the application level of the platform that should be understood.
In this lesson, we will focus on building understanding of those topics.
As we move through this lesson,
keep in mind the responsibilities of components discussed in the technology stack lesson.
Having a clear understanding of component responsibilities,
and relating that understanding to
the logical organizational structure described in this lesson,
is key for your work with Apigee.
Over the next few minutes,
we will focus on introducing some key concepts and terms used by Apigee.
APIs allow you to provide access to resources.
On this slide, we illustrate an example of a customer API.
An API implementation is called an API proxy.
An API proxy allows you to describe the request and response flow for a given API.
Policies are attached to these flows.
A policy allows you to intercept the flow and inject specific functionality.
Apigee Edge offers many out-of-the-box and extension policies;
these can be used in the request and response flows.
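As a minimal sketch, assuming a policy name and rate value chosen only for illustration, a Spike Arrest policy attached to a request flow could look like this:

    <SpikeArrest name="SA-LimitTraffic">
      <DisplayName>SA-LimitTraffic</DisplayName>
      <Rate>30ps</Rate>
    </SpikeArrest>

Here, the Rate element smooths traffic to roughly 30 requests per second; the policy is then referenced as a step in a flow.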
Conceptually, the entry point for API calls to an API proxy
is represented by a virtual host.
Virtual hosts are exposed on the router,
and allow you to direct requests to specific API proxies.
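A virtual host is itself a small piece of configuration. A minimal sketch, with an assumed host alias and port, might look like this:

    <VirtualHost name="default">
      <HostAliases>
        <HostAlias>api.example.com</HostAlias>
      </HostAliases>
      <Port>9001</Port>
    </VirtualHost>

Requests arriving on the router with a Host header of api.example.com would then be matched to API proxies that reference this virtual host.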
Often, an API may call a single back-end system.
Apigee Edge allows you to abstract back-end system URLs,
making it easy to move API proxies between environments without needing to modify code.
Target servers are used for this.
A target server allows you to describe a back-end resource,
and associate this definition with a name used by API proxies.
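As an illustrative sketch with assumed names, a target server definition pairs a logical name with a host and port:

    <TargetServer name="customer-backend">
      <Host>backend.internal.example.com</Host>
      <Port>8443</Port>
      <IsEnabled>true</IsEnabled>
    </TargetServer>

An API proxy's target endpoint then refers to the name instead of a hard-coded URL:

    <HTTPTargetConnection>
      <LoadBalancer>
        <Server name="customer-backend"/>
      </LoadBalancer>
      <Path>/customers</Path>
    </HTTPTargetConnection>

Because each environment can define customer-backend differently, the same proxy can be promoted between environments unchanged.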
API proxies are subdivided into a proxy endpoint and a target endpoint,
and API flows are subdivided into pre-flow,
conditional flows, and post-flow.
Day-to-day operations work usually does not use these concepts,
but it is important to understand that they exist because the operations team may,
at some point in time, provide requirements to API engineers.
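For reference, a proxy endpoint skeleton that shows how these pieces fit together might look like the following sketch; the base path, flow name, condition, and policy step are assumptions used only for illustration:

    <ProxyEndpoint name="default">
      <PreFlow>
        <Request>
          <Step><Name>SA-LimitTraffic</Name></Step>
        </Request>
      </PreFlow>
      <Flows>
        <Flow name="GetCustomer">
          <Condition>(proxy.pathsuffix MatchesPath "/customers/*") and (request.verb = "GET")</Condition>
        </Flow>
      </Flows>
      <PostFlow>
        <Response/>
      </PostFlow>
      <HTTPProxyConnection>
        <BasePath>/v1/customers</BasePath>
        <VirtualHost>default</VirtualHost>
      </HTTPProxyConnection>
      <RouteRule name="default">
        <TargetEndpoint>default</TargetEndpoint>
      </RouteRule>
    </ProxyEndpoint>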
Apigee Edge offers many policies.
It is relevant for operations to know about some of these.
Policies such as Cache,
Key Value Map Operations, Concurrent Rate Limit,
Spike Arrest, Message Logging, and Statistics Collector
may apply to some operational requirements.
The policies are implemented by API engineers during API proxy development.
Operations will act as a stakeholder for applicable requirements,
and work with the API team for the implementation of specific capabilities.
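As one example of an operationally driven capability, a Message Logging policy that sends records to a syslog server could be sketched as follows; the host, port, and message format are assumptions:

    <MessageLogging name="ML-LogToSyslog">
      <Syslog>
        <Message>{request.verb} {request.uri} {response.status.code}</Message>
        <Host>logs.internal.example.com</Host>
        <Port>514</Port>
        <Protocol>TCP</Protocol>
      </Syslog>
    </MessageLogging>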
On Apigee Edge, most policies are provided out of the box;
these policies are described in XML.
Developers simply configure the behavior of the policies.
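A Quota policy is a good example of this declarative style; in the sketch below, the policy name and limit values are assumptions:

    <Quota name="Q-MonthlyLimit">
      <Allow count="10000"/>
      <Interval>1</Interval>
      <TimeUnit>month</TimeUnit>
      <Distributed>true</Distributed>
      <Synchronous>true</Synchronous>
    </Quota>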
Even though XML policies allow you to address a wide range of scenarios,
sometimes there is a need to implement custom behavior.
Apigee Edge provides extension policies that allow
API engineers to write custom code in JavaScript, Java and Python.
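Even then, the extension policy itself is declared in XML and points at a resource file containing the custom code. A minimal sketch of a JavaScript policy, with an assumed resource file name:

    <Javascript name="JS-SetCustomHeader" timeLimit="200">
      <ResourceURL>jsc://set-custom-header.js</ResourceURL>
    </Javascript>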
API proxies should consist mostly of XML policies.
It is a red flag if you encounter an API proxy that contains only code.
Next, we will focus on describing key aspects of Apigee Edge organizational structure.
In Apigee terminology, a planet is what
most people would call their entire Apigee installation.
A planet is a collection of resources across one or more regions,
dedicated to Apigee Edge.
This includes all of the hardware,
virtual machines or cloud instances in a given Apigee installation.
Planets are subdivided into regions;
a region is typically represented by a data center or cloud region.
Pods represent a logical grouping of components by function.
We use pods to group components together for configuration and management.
Apigee Edge uses three pods:
gateway, central, and analytics.
An organization is a logical boundary.
Organizations enforce logical data partitioning and security,
and they are key to Apigee Edge's multi-tenancy model.
An organization is a tenant;
Apigee Edge is a multi-tenant system.
This means you can create one or more organizations on the same physical infrastructure.
Environments can be visualized as working spaces within a given organization.
Typically, environments are mapped to stages of the API software development life cycle (SDLC).
Most customers create environments named dev,
QA and so on.
The relationship between organizations and environments is one-to-many.
Each organization can contain one or more environments.
Apigee Edge analytics data is partitioned by organization and environment.
Virtual hosts represent the entry point to an environment.
The relationship between environments and virtual hosts is one-to-many;
each environment can have one or more virtual hosts.
Putting it all together, a planet can be visualized as follows.
On the diagram, we illustrate an Apigee planet with three regions.
The planet contains two organizations, A and B.
Each organization contains two environments,
A1 and A2, and B1 and B2 respectively.
Notice that organizations and environments span all regions of the planet.
This is an important concept.
The scope of organizations and environments allows you to execute
actions in a single location and influence the behavior of APIs,
the platform, and infrastructure components across multiple regions.
For example, deploying an API to environment A1 in region three
will deploy the API to A1 in regions one and two as well.
This allows you to manage a distributed infrastructure from a centralized point.
As you decide how many organizations and environments to create, remember what they are.
Both organizations and environments are
logical boundaries which enforce data partitioning and security.
Two organizations share nothing.
APIs, API keys, and other items within a given organization
are not visible from other organizations.
When deciding how many organizations and environments to create,
always assume an outside-in position.
Do not let your internal limitations,
challenges, or preferences drive what your customers see and use.
For most customers, one production organization and one production environment is ideal.
Having two or more organizations in production
implies that API consumers looking to use organization A APIs and organization B APIs
will require two sets of API keys.
If your objective is to provide a consistent company wide API offering,
two or more organizations will not help.
One will provide a better experience to API consumers.
Given the logical nature of environments,
an API deployment is no more than
a logical association between an API revision and the specified environment.
On Apigee Edge, APIs are owned by organizations.
Deploying APIs to an environment within
the same organization does not require code artifact movement.
We will explore API deployment in detail, later on.
This diagram illustrates the concepts of regions,
pods and servers, and their relationships to each other.
On Apigee Edge, it is possible to add and
remove logical and physical capacity from the system.
During installation, each server is registered with its corresponding pod.
Associations between servers and pods are managed using UUIDs.
During installation, a UUID is generated for each server.
These UUIDs are later used to register the server to the pod.
An API product bundles resources,
such as API proxies,
in order to provide a specific level of
access and functionality for client app developers.
An API product typically specifies a list of API proxies along with access limits,
the API key approval method,
and other configurations that should be in place for all of the bundled proxies.
API products are a key part of API management.
Elements such as quotas,
and other characteristics of API behavior,
are defined as part of API products.
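As a rough sketch of the kind of information a product definition carries, loosely following the management API's XML representation, with element names and values used here purely for illustration:

    <ApiProduct name="customer-gold">
      <DisplayName>Customer Gold</DisplayName>
      <ApprovalType>auto</ApprovalType>
      <Environments>
        <Environment>prod</Environment>
      </Environments>
      <Proxies>
        <Proxy>customer-v1</Proxy>
      </Proxies>
      <Quota>1000</Quota>
      <QuotaInterval>1</QuotaInterval>
      <QuotaTimeUnit>hour</QuotaTimeUnit>
    </ApiProduct>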
API products allow delegation of API management to API product owners.
Changes to products can be implemented without
the intervention of API engineers or operations.
Apigee Edge provides fine-grained,
role-based access control.
Users, roles, and permissions are created and stored in LDAP.
Management of users and roles can be done through the UI and the management API.
During installation, a collection of default roles is created.
You can also create custom roles,
tailoring them to your specific security and management requirements.