
Representation of capacity

Malcolm Betts ZTE

Draft 3 - 2019/03/05

History:

 

Version | Date       | Changes
Draft 0 | 2016/03/15 | Initial version. onf2016.091_Representation_of_capaci.00.pptx
Draft 1 | 2016/04/15 | Updates from the discussion on the 2016/03/31 conference call (see email sent 2016/03/31). Converted from PowerPoint slides to a Word document. onf2016.091_Representation_of_capaci.01.docx
Draft 2 | 2016/06/07 | Changed admin context to “resource pool”. Posted as onf2016.091_Representation_of_capaci.02.docx
Draft 3 | 2019/03/05 | Refresh/clean-up of draft 2. oimt2019.MB.001_representation-of-capacity.docx

1         Scope and objectives

  • This document is intended to identify and describe the “capacity” concepts that need to be represented in the information model. It does not propose a structure for the information model.
    • For example, to simplify the description the figures show resource pools in the local/admin context and resource groups in the provider and client contexts as discrete entities – the information model should consider using a more compact/optimized representation
  • The SDN architecture and the Lifecycle states are used as a framework to describe the problem.
    • The capacity in the “potential busy” state is tracked, but it’s not clear if this is necessary.
  • The current version of the document
    • Uses link capacity as an example:
      • The model should represent “arbitrary” resource capacity
        • E.g. compute power, memory….
    • Represents resource capacity as a “simple integer”; more complex (including multi-parameter) capacity types must be supported
      • E.g. CIR + EIR; maximum number of clients, label range….
    • Considers only “simple” resource requests
      • Some requests may require several different types of resources
    • Describes only the “static” aspects
      • The model should support the addition of temporal (e.g. schedule) aspects
        • This may require tracking of capacity in the “planned” or “pending removal” state.

2         Background

2.1     ONF SDN architecture

Figure 2 of TR-521 SDN architecture version 1.1 shows the core functions of the architecture.

Figure 2-1: TR-521 Figure 2 – Core of the SDN architecture

TR-521 defines the contexts that exist within an SDN controller:

Client context

The conceptual component of a server that represents all information about a given client and is responsible for participation in active server-client management-control operations.

Server context

The conceptual component of a client that represents all information about a given server and is responsible for participation in active server-client management-control operations.

Administrator role: is described in section 5.2.1 of TR-521.

An administrator is characterized by having greater visibility and privilege than an ordinary client. Normally, an administrator would be a trusted employee of the same organization that operates the SDN controller (note). The administrator’s responsibility is to create an environment that can offer services, to modify the environment from time to time, to monitor the environment for proper operation, and to act on exceptions beyond the ability of the environment to resolve internally.

The administrator provides policy to orchestration and virtualization that defines the rules for:

  • Allocation of resources to a client
  • Name translation between the server, local and client contexts
  • Abstraction (aggregation) of local resources into the resources that are presented to a client

Resource groups: are described in section 5.4 of TR-521.

Any SDN service is built upon some set of resources, whose functions and interfaces are configured to the particular need. Resources may be physical or virtual, active or passive, and in many cases, may be created, scaled, or destroyed, by or at the behest of the client or the server. Resources available to SDN include VNFs, as defined by the ETSI NFV initiative [6].

The server contexts provide the local admin context with a view of the resource groups in the client contexts of the server controllers. From the perspective of describing capacity allocation it is convenient to aggregate all of these resources into a local resource pool in the local naming context (this may require translation of the names from the server context to the local context). Under the direction of the administrator, orchestration allocates resources from the local resource pool to clients; virtualization provides both name translation between the local context and the client context, and abstraction. This results in the population of the resource groups in the client contexts.

2.2     Lifecycle State

From onf2016.052_CIM_State_Model.02.docx.

This state is used to track the planned deployment, allocation to clients and withdrawal of resources. The following states are used:

  • Planned : The resource is planned but is not present in the network. Should include a “time” when the resources are expected to be installed.
  • Potential available : The supporting resources are present in the network but are shared with other clients; or require further configuration before they can be used; or both.
    • When a potential resource is configured and allocated to a client it is moved to the “installed” state for that client.
    • If the potential resource has been consumed (e.g. allocated to another client) it is moved to the “potential busy” state for all other clients.
  • Potential busy : The supporting resources are present in the network but have been allocated to other clients.
  • Installed : The resource is present in the network and is capable of providing the service.
  • Pending removal : The resource has been marked for removal. Should include a “time” when the resources are expected to be removed.

The lifecycle state can be observed in a client context and directly controlled by the administrator.
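A minimal sketch of these states as a Python enumeration (an illustration only; the transition table is an assumption based on the moves described above and is not part of the state model):

```python
from enum import Enum

class LifecycleState(Enum):
    """Lifecycle states from onf2016.052_CIM_State_Model (names as listed above)."""
    PLANNED = "planned"
    POTENTIAL_AVAILABLE = "potential available"
    POTENTIAL_BUSY = "potential busy"
    INSTALLED = "installed"
    PENDING_REMOVAL = "pending removal"

# Illustrative transitions only (an assumption): a potential-available resource
# becomes installed for the client it is allocated to and potential-busy for
# all other clients that share it; releasing it reverses the move.
ALLOWED_TRANSITIONS = {
    LifecycleState.PLANNED: {LifecycleState.POTENTIAL_AVAILABLE,
                             LifecycleState.INSTALLED},
    LifecycleState.POTENTIAL_AVAILABLE: {LifecycleState.INSTALLED,
                                         LifecycleState.POTENTIAL_BUSY,
                                         LifecycleState.PENDING_REMOVAL},
    LifecycleState.POTENTIAL_BUSY: {LifecycleState.POTENTIAL_AVAILABLE,
                                    LifecycleState.PENDING_REMOVAL},
    LifecycleState.INSTALLED: {LifecycleState.POTENTIAL_AVAILABLE,
                               LifecycleState.PENDING_REMOVAL},
    LifecycleState.PENDING_REMOVAL: set(),
}
```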

3         Capacity Representation

Resource capacity can be tracked using the following categories:

Local resource pool:

  • Dedicated to a client:
    • Installed
    • Planned
    • Pending removal
  • Shared between clients:
    • Potential used (installed and allocated to a client – tracked per client)
    • Potential available (installed, not allocated to a client – tracked per client)
    • Planned
    • Pending removal
  • Not allocated to a client:
    • Installed
    • Planned
    • Pending removal

Client context (per client):

  • Dedicated:
    • Installed
    • Planned
    • Pending removal
  • Shared:
    • Installed (allocated to “this” client)
    • Potential available
    • Potential busy
    • Planned
    • Pending removal

Server context (per server context):

  • Installed
  • Potential available
  • Potential busy
  • Planned
  • Pending removal

Notes:

  • The server context provides a local copy of the information in the corresponding client context (in the server)
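These categories can be pictured as a simple data structure. The sketch below is an illustrative Python rendering, not a proposed model structure (per the scope in section 1); the class and field names are invented for this document, and capacity is a simple integer, as in the examples that follow.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class SharedView:
    """Per-client view of a shared pool (capacity as a simple integer)."""
    installed: int = 0            # allocated to "this" client
    potential_available: int = 0  # could still be allocated to this client
    potential_busy: int = 0       # consumed by other clients
    planned: int = 0
    pending_removal: int = 0

@dataclass
class ClientContextCapacity:
    """Capacity presented to one client context."""
    dedicated_installed: int = 0
    dedicated_planned: int = 0
    dedicated_pending_removal: int = 0
    shared: SharedView = field(default_factory=SharedView)

@dataclass
class LocalResourcePool:
    """Capacity in the local/admin naming context."""
    dedicated: Dict[str, int] = field(default_factory=dict)        # installed, per client
    shared_total: int = 0                                           # installed shared capacity
    shared_installed: Dict[str, int] = field(default_factory=dict)  # shared, per client
    not_allocated: int = 0
```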

The use of these categories is illustrated in the following examples. The first example in 3.1 shows the server (blue) local resource pool and three client contexts. The second example in 3.2 shows the case where the server is using shared resources. The third example in 3.3 shows the case where a number of server links are aggregated and abstracted as a single resource to the clients.

None of the examples show resources in the “planned” or “pending removal” categories.

3.1     Local resource pool and client contexts

Take the example in figure 2-1 and look at the admin and client contexts inside the Blue controller. Assume that the total capacity (in the admin context) is 150 units and that 40 units are not (yet) allocated to any client.

The initial allocation to the client contexts is:

Client context | Dedicated Installed | Shared Installed | Shared Potential | Shared Busy
Blue           | 10                  | 0                | 0                | 0
Green          | 30                  | 0                | 40               | 0
Red            | 30                  | 0                | 40               | 0

This is illustrated in figure 3-1 below:

 

Figure 3-1 Initial link resource pool and client contexts in the blue controller

Following the allocation of some of the shared capacity, this becomes:

Client context | Dedicated Installed | Shared Installed | Shared Potential | Shared Busy
Blue           | 10                  | 0                | 0                | 0
Green          | 30                  | 10               | 10               | 20
Red            | 30                  | 20               | 10               | 10

This is illustrated in figure 3-2 below:

 

Figure 3-2: Link resource pool and client contexts inside the “Blue” controller
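As a cross-check on the figure 3-2 numbers, a short Python sketch (variable names invented here) verifying that each sharing client still accounts for the full 40-unit shared pool, and that the capacity that is busy for one client is the capacity installed for the other:

```python
# Shared-pool view per client after the allocation of figure 3-2:
shared_view = {
    "green": {"installed": 10, "potential": 10, "busy": 20},
    "red":   {"installed": 20, "potential": 10, "busy": 10},
}
SHARED_POOL = 40  # shared capacity in the local resource pool

for client, view in shared_view.items():
    # Each sharing client still accounts for the full 40-unit shared pool.
    assert view["installed"] + view["potential"] + view["busy"] == SHARED_POOL

# What is busy for one client is what the other client has installed.
assert shared_view["green"]["busy"] == shared_view["red"]["installed"]
assert shared_view["red"]["busy"] == shared_view["green"]["installed"]
```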

3.1.1     Client use of shared capacity

Depending on the rules (policy) that the administrator provides to orchestration, a client may be allowed to:

a)      Make a request to move some capacity from the shared potential category to shared installed category.

  • The client can then activate services that use these resources.
  • If the resources are no longer required the client makes an explicit request to move them from the shared installed category to the shared potential category.

Or:

b)      Make a request to activate a service using shared potential resources.

  • Orchestration moves these resources to shared installed and activates the service
  • When the client deletes the service orchestration moves the resources from shared installed to shared potential

Orchestration will reject any requests that do not conform to the policy for that client.
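A minimal sketch of these two options, assuming Python; the pool structure and the per-client limit used as the policy check are illustrative assumptions, not a proposed interface:

```python
class PolicyViolation(Exception):
    pass

class SharedPool:
    """Shared capacity as seen in one client context (simple integer units)."""
    def __init__(self, installed: int, potential: int, limit: int):
        self.installed = installed
        self.potential = potential
        self.limit = limit  # assumed per-client policy: maximum shared installed

    def _check(self, amount: int) -> None:
        # Orchestration rejects requests that do not conform to the client's policy.
        if amount > self.potential or self.installed + amount > self.limit:
            raise PolicyViolation("request exceeds policy or available potential")

    # Option (a): explicit move between shared potential and shared installed.
    def allocate(self, amount: int) -> None:
        self._check(amount)
        self.potential -= amount
        self.installed += amount

    def release(self, amount: int) -> None:
        if amount > self.installed:
            raise PolicyViolation("cannot release more than is installed")
        self.installed -= amount
        self.potential += amount

    # Option (b): capacity moves as a side effect of service activation/deletion.
    def activate_service(self, amount: int) -> None:
        self.allocate(amount)   # orchestration moves the resources, then activates

    def delete_service(self, amount: int) -> None:
        self.release(amount)    # orchestration moves the resources back
```

In option (b) the bookkeeping moves are a side effect of the service operations, which matches the description above.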

3.2     Server with shared resources

Consider the case where the Green client described in 3.1 is an SDN controller with clients Red-1 and Blue-1.

Figure 3-3: Green controller relationships

The blue server context in the green controller has the same information as the green client context in the blue controller.

Consider the case where the blue server context has:

 

               | Dedicated Installed | Shared Installed | Shared Potential | Shared Busy
Blue server    | 30                  | 0                | 40               | 0

Then the green admin context may allocate these resources as follows:

Client context | Dedicated Installed | Shared Installed | Shared Potential | Shared Busy
Red-1          | 10                  | 0                | 50               | 0
Blue-1         | 10                  | 0                | 50               | 0

Note that the green administrator has allocated 10 units of the dedicated installed capacity from the blue server to the shared potential category for red-1 and blue-1 client contexts. This is illustrated in figure 3-4 below.

 

Figure 3-4: Shared server capacity – initial allocation

If the blue server allocates some of the shared capacity as in 3.1 above, the blue server context has:

 

               | Dedicated Installed | Shared Installed | Shared Potential | Shared Busy
Blue server    | 30                  | 10               | 10               | 20

This would be reflected in the client contexts as:

Client context | Dedicated Installed | Shared Installed | Shared Potential | Shared Busy
Red-1          | 10                  | 0                | 30               | 20
Blue-1         | 10                  | 0                | 30               | 20

This is illustrated in figure 3-5 below.

 

Figure 3-5: Shared server capacity – shared server capacity allocation

The 20 units of shared busy capacity from the blue server must be shown as busy to the clients of green. The 10 units of shared capacity that have been allocated to green will remain in the shared potential category for the red-1 and blue-1 clients. However, the Green admin context may now allocate these 10 shared installed resources, together with the 10 dedicated units that it previously placed in the shared pool, to its clients without making a request to the blue controller. For example:

Client context | Dedicated Installed | Shared Installed | Shared Potential | Shared Busy
Red-1          | 10                  | 20               | 10               | 20
Blue-1         | 10                  | 0                | 10               | 40

This is illustrated in figure 3-6 below.

 

Figure 3-6: Shared server capacity – shared capacity allocated to clients
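How the blue server's busy capacity propagates into the green client contexts can be checked with a short sketch (Python; the numbers are taken from the tables above, the variable names are invented here):

```python
# Blue server context in the green controller after blue's own allocation (figure 3-5):
blue_server = {"dedicated_installed": 30, "shared_installed": 10,
               "shared_potential": 10, "shared_busy": 20}

# Shared pool that green offers to red-1 and blue-1: the 40 shared units from blue
# plus the 10 dedicated units green chose not to dedicate to either client.
GREEN_SHARED_POOL = 50

# Shared-capacity views in the green client contexts (figure 3-6):
red_1  = {"shared_installed": 20, "shared_potential": 10, "shared_busy": 20}
blue_1 = {"shared_installed": 0,  "shared_potential": 10, "shared_busy": 40}

for view in (red_1, blue_1):
    # Each client of green still accounts for the full 50-unit shared pool.
    assert sum(view.values()) == GREEN_SHARED_POOL

# Blue's 20 busy units are shown as busy to both of green's clients;
# blue-1 additionally sees red-1's 20 installed units as busy.
assert red_1["shared_busy"] == blue_server["shared_busy"]
assert blue_1["shared_busy"] == blue_server["shared_busy"] + red_1["shared_installed"]
```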

3.2.1     Client use of shared capacity

The options described in 3.1.1 also apply in this case, with the additional restriction that, if the capacity requested by the client is to be drawn from the shared potential capacity of the blue server, then the green controller must request allocation of the capacity (as described in 3.1.1) before responding to the request from its client.
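A minimal sketch of that restriction, assuming Python and invented names: if the client's request cannot be satisfied from the shared capacity already installed for green, green first requests the shortfall from the blue controller and only then responds to its own client.

```python
def handle_client_request(amount: int,
                          locally_installed: int,
                          request_from_blue) -> bool:
    """Serve a shared-capacity request from red-1 or blue-1 (illustrative only).

    locally_installed: shared capacity already installed for green, usable
    without asking blue.  request_from_blue: callable that asks the blue
    controller to move capacity from shared potential to shared installed
    (as in 3.1.1), returning True on success.
    """
    shortfall = amount - locally_installed
    if shortfall > 0:
        # The capacity must be drawn from the blue server's shared potential:
        # green requests the allocation before responding to its client.
        if not request_from_blue(shortfall):
            return False  # blue rejected the request; reject the client request
    return True  # capacity secured; green can now respond to its client
```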

3.3     Aggregation of server links

Consider the case where the resources in the blue admin context are provided by the gold server.

Figure 3-7: Blue controller relationships

If the gold server provides three resources, the blue controller may aggregate them into a single resource as shown in figure 3-8.

 

Figure 3-8: Aggregation of server resources into a single resource

The aggregation of server resources abstracts away the details of the underlying resource. Care must be taken that important information is not hidden.

Note: This example has been constructed to expose some of the limitations of aggregated links.

In this example the dedicated installed capacity in the red client context is 30. However, the largest payload that can be accommodated is only 20 units, since the capacity of 30 is an aggregate of 20 from link 1 + 10 from link 3.

Also, neither client green nor client red can combine capacity from the dedicated and shared pools to support a single payload.

In general, if the maximum size of a client payload is variable (e.g. ODUflex or a packet client) then either:

  • The aggregated link must provide the client context with information about the maximum payload capacity, or:
  • The structure of the server links should be exposed to the client context (i.e. aggregation should not be used).
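The limitation can be stated compactly: the total capacity of an aggregated link is not sufficient information for variable-size payloads; the maximum single payload must also be known. A small Python sketch, using the link sizes from the example above (20 units from link 1 and 10 units from link 3 for the red client):

```python
# Dedicated capacity available to the red client, per underlying server link.
red_dedicated_per_link = {"link 1": 20, "link 3": 10}

total_capacity = sum(red_dedicated_per_link.values())       # 30 units, as advertised
max_single_payload = max(red_dedicated_per_link.values())   # only 20 units

# An aggregated view that advertises only total_capacity would accept a
# 25-unit request that no single underlying link can carry.
requested = 25
fits_total = requested <= total_capacity          # True - misleading
fits_payload = requested <= max_single_payload    # False - the real constraint
assert fits_total and not fits_payload
```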

3.4     Other considerations

3.4.1     Categories selected in this document

Some dimensions of capacity can be derived from the information in the categories that have been defined.

For example, the total capacity in the admin context is the sum of “not allocated” + “dedicated allocated” + “shared allocated” + “shared available”.

Alternatively, a “total capacity” category could be defined; then, for example, “not allocated” is
“total capacity” – (“dedicated allocated” + “shared allocated” + “shared available”)
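Using the figure 3-2 numbers as a check (a small Python sketch with abbreviated category names):

```python
# Admin-context view of figure 3-2.
not_allocated       = 40
dedicated_allocated = 10 + 30 + 30   # blue + green + red
shared_allocated    = 10 + 20        # shared installed for green + red
shared_available    = 10             # shared potential still unallocated

total_capacity = not_allocated + dedicated_allocated + shared_allocated + shared_available
assert total_capacity == 150

# Equivalently, with "total capacity" as the stored category:
derived_not_allocated = total_capacity - (dedicated_allocated
                                          + shared_allocated
                                          + shared_available)
assert derived_not_allocated == not_allocated
```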

Depending on the business agreement between the server and its client it may not be necessary to include the “shared busy” category in the client context.

3.4.2     Presentation to client context

In the OTN an administrator may choose:

  • To show all links as a single pool of capacity that may be shared by any (size) ODU client, or:
  • To define a client context for each ODUk client; this allows the administrator to define the resources available to each of the OTN clients.

The admin context may choose to represent the capacity of a server to different clients in different terms.

For example, if the client of a 100G OTN link is

  • ODU2 then the capacity may be represented as 10 units
  • ODU0 then the capacity may be represented as 80 units
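A sketch of that per-client representation (Python; it assumes the standard OTN mapping of a 100G ODU4 into 80 tributary slots of 1.25G, with an ODU2 occupying 8 slots and an ODU0 occupying 1):

```python
# A 100G OTN link (ODU4) has 80 tributary slots of 1.25G each.
ODU4_TRIBUTARY_SLOTS = 80
TS_PER_CLIENT = {"ODU2": 8, "ODU0": 1}   # tributary slots occupied per client type

def capacity_units(client_type: str) -> int:
    """Capacity of the link expressed in units of the given client type."""
    return ODU4_TRIBUTARY_SLOTS // TS_PER_CLIENT[client_type]

assert capacity_units("ODU2") == 10   # represented as 10 units to an ODU2 client
assert capacity_units("ODU0") == 80   # represented as 80 units to an ODU0 client
```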

3.4.3     Multi-parameter expression of capacity

The capacity of a resource may be expressed using more than one parameter. These parameters should be included in the categories identified in this document:

For packet clients:

  • CIR and EIR or:
  • CIR and EIR and number of clients or:
  • CIR and EIR and labels available for allocation

Note: The labels allocated to a client may be either the data plane labels or an alias to the data plane label

In a compute environment capacity may be expressed as:

  • Processing power (MIPS), Memory (GB), Storage (GB), or:
  • Processing power (MIPS), Memory (GB), Storage (GB), Data to be moved (GB) in time (ms)

Note: Orchestration may need to marshal the resources from more than one server context to satisfy the client request
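A sketch of such multi-parameter capacity types (Python dataclasses; the class and field names are invented for this document, and only the packet and compute examples above are shown):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PacketCapacity:
    """Capacity of a packet resource (see the packet-client bullets above)."""
    cir_mbps: int                         # committed information rate
    eir_mbps: int                         # excess information rate
    max_clients: Optional[int] = None     # optional: maximum number of clients
    labels: List[int] = field(default_factory=list)  # optional: labels (or aliases) available

@dataclass
class ComputeCapacity:
    """Capacity of a compute resource (see the compute bullets above)."""
    processing_mips: int
    memory_gb: int
    storage_gb: int
    data_to_move_gb: Optional[int] = None  # optional: bulk data to be moved ...
    move_time_ms: Optional[int] = None     # ... within this time
```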

For OTN beyond 100G

  • Bandwidth and number of clients

Note: The minimum payload size is 1.25G, the minimum bandwidth that can be allocated by the server is 5G and the maximum number of clients is limited to 10 per 100G of server capacity

For OTN B100G

  • Server bandwidth < client bandwidth.
  • Note: For example, an ODUC2 client expects the server OTUC2 to provide 40 TS. However, the server may be an OTUC2-30, which only supports 30 TS.