# App-FargateStack
- (3) You must log at least at the 'info' level to report
progress. This is set for you when you run `plan` or `apply`.
- (4) By default, an ECS service is NOT created for you
for daemon and http tasks. Instead, after creating all of the
necessary resources using `apply`, run `app-FargateStack
deploy-service task-name`. This will launch your service with a count
of 1 task. You can optionally specify a different count after the task
name.
- (5) You can tail or display a set of log events from a task's
log stream:
app-FargateStack logs [--log-wait] [--log-time] task-name start [end]
- --log-wait, --no-log-wait (optional)
Continue to monitor stream and dump logs to STDOUT
default: --log-wait
- --log-time, --no-log-time (optional)
Output the CloudWatch timestamp of the message.
default: --log-time
- task-name
The name of the task whose logs you want to view.
- start
Starting date and optionally time of the log events to display. Format can be one
of:
Nd => N days ago
Nm => N minutes ago
Nh => N hours ago
mm/dd/yyyy
mm/dd/yyyy hh:mm:ss
- end
Optional. If provided, both start and end must be date-time strings.
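The relative and absolute time formats above can be illustrated with a short Python sketch (an illustration of the accepted formats only, not the tool's actual implementation):

```python
from datetime import datetime, timedelta

def parse_log_time(value, now=None):
    """Resolve a relative (Nd/Nh/Nm) or absolute log time spec to a datetime."""
    now = now or datetime.now()
    units = {"d": "days", "h": "hours", "m": "minutes"}
    suffix = value[-1].lower()
    if suffix in units and value[:-1].isdigit():
        # Relative form, e.g. "2h" => 2 hours ago
        return now - timedelta(**{units[suffix]: int(value[:-1])})
    for fmt in ("%m/%d/%Y %H:%M:%S", "%m/%d/%Y"):
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            pass
    raise ValueError(f"unrecognized time spec: {value!r}")

now = datetime(2025, 7, 4, 12, 0, 0)
print(parse_log_time("2h", now))          # 2025-07-04 10:00:00
print(parse_log_time("07/01/2025", now))  # 2025-07-01 00:00:00
```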
- (6) The default log level is 'info' which will create an audit
trail of resource provisioning. Certain commands log at the 'error'
level to reduce console noise. Logging at lower levels will prevent
potentially useful messages from being displayed. To see the AWS CLI
commands being executed, log at the 'debug' level. The 'trace' level
will output the result of the AWS CLI commands.
- (7) Use `--skip-register` if you want to update a task's target
rule without registering a new task definition. This is typically done
if for some reason your target rule is out of sync with your task
definition version.
- (8) To speed up processing and avoid unnecessary API calls the
framework considers the configuration file the source of truth and a
reliable representation of the state of the stack. If you want to
re-sync the configuration file, set `--no-cache` and run `plan`. In
most cases this should not be necessary, as the framework will
invalidate the configuration if an error occurs, forcing a re-sync on
the next run of `plan` or `apply`.
- (9) `--no-update` is not permitted with `apply`. If you need a
dry plan without applying or updating the config, use `--dryrun` (and
optionally `--no-update`) with `plan`.
- (10) Set `--route53-profile` to the profile that has
permissions to manage your hosted zones. By default the script will
use the default profile.
- (11) Deleting a task, daemon, or http service will delete all of
the resources associated with that task.
- For scheduled tasks you can disable the job from running instead of
deleting its resources.
- For services (daemons or HTTP services) you
can stop them or delete the service (`delete-service`) instead of
deleting all of the resources.
- These resources will **NOT** be removed:
- ECR image associated with a task
- An ACM certificate provisioned by App::FargateStack
- (12) This command will add a scaling policy to an HTTP, HTTPS or
daemon task. In order to apply the policy you must run `plan` &
`apply`. You provide the following arguments in order:
[task-name] metric-type metric-value [min-capacity max-capacity [scale-out-cooldown scale-in-cooldown]]
- `task-name` is optional if you only have 1 scalable task.
- `min-capacity`, `max-capacity` are optional and will default to 1 and 2 respectively.
- `scale-out-cooldown`, `scale-in-cooldown` are optional. If
provided, you must also include the capacity parameters.
app-FargateStack apache requests 500 2 3 60 300
- (13) This command will add a schedule scaling action to your
configuration. In order to activate the schedule you must run `plan`
and `apply`. You provide the following arguments in order:
[task-name] action-name start-time end-time days scale-out-capacity scale-in-capacity
- `task-name` is optional if you only have 1 scalable task.
- `action-name` is a name for your schedule. It must be
unique within your entire configuration.
- `start-time` is UTC. The format for the starting time is
MM:HH. (Example: 00:18)
- `days` is the day or days of the week for the scheduled action.
_Note: Days should be one of MON, TUE, WED, THU, FRI, SAT, SUN or 1-7_
Example:
Scale out to 4 tasks at 10pm (EDT) for 30 minutes to run a batch job
on Friday night.
00:02 30:02 SAT 4/1 4/1
_Note that the cron specification is in UTC, hence we run at 2am for
30 minutes on Saturday morning in UTC._
- `end-time` is the time to scale back in. Same format as `start-time`
- `scale-out-capacity`, `scale-in-capacity` - These options
represent the scale out and scale in capacities.
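The UTC arithmetic behind the example above can be checked with a short Python sketch (illustration only; the date is an arbitrary Friday):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# 10pm EDT on a Friday (2025-07-04 happens to be a Friday)
local = datetime(2025, 7, 4, 22, 0, tzinfo=ZoneInfo("America/New_York"))
utc = local.astimezone(ZoneInfo("UTC"))

print(utc.strftime("%a %H:%M"))  # Sat 02:00 -> start-time 00:02 (MM:HH) on SAT
```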
For example, if your configuration defines an S3 bucket, the ECS task
role will be permitted to access only that specific bucket - not all
buckets in your account. The policy is updated when new resources are
added to the configuration file.
The task execution role name and role policy name are found under the
`role:` key in the configuration. The task role is found under the
`task_role:` key. Role names and role policy names are automatically
fabricated for you from the name you specified under the `app:` key.
## Task Execution Role vs. Task Role
It's important to understand that App::FargateStack provisions two
distinct IAM roles for your service. The Task Role, which is detailed
above, grants your application the specific permissions it needs to
interact with other AWS services like S3 or SQS. In addition, the
framework also creates a Task Execution Role. This second role is used
by the Amazon ECS container agent itself and grants it permission to
perform essential actions, such as pulling container images from ECR
and sending logs to CloudWatch. You typically won't need to modify the
Task Execution Role, as the framework manages its permissions
automatically.
[Back to Table of Contents](#table-of-contents)
# SECURITY GROUPS
A security group is automatically provisioned for your Fargate
cluster. If you define a task of type `http` or `https`, the
security group attached to your Application Load Balancer (ALB) is
automatically authorized for ingress to your Fargate task. This is a
rule allowing ALB-to-Fargate traffic.
[Back to Table of Contents](#table-of-contents)
# FILESYSTEM SUPPORT
EFS volumes are defined per task and mounted according to the task
definition. This design provides fine-grained control over EFS usage,
rather than treating it as a global, stack-level resource.
Each task that requires EFS support must include both a volume and
mountPoint configuration. The ECS task role is automatically updated
to allow EFS access based on your specification.
To specify EFS support in a task:
    efs:
      id: fs-1234567b
      mount_point: /mnt/my-stack
      path: /
      readonly:
Acceptable values for `readonly` are "true" and "false".
## Field Descriptions
- id:
The ID of an existing EFS filesystem. The framework does not provision
the EFS, but will validate its existence in the current AWS account
and region.
- mount_point:
The container path to which the EFS volume will be mounted.
- path:
The path on the EFS filesystem to map to your container's mount point.
- readonly:
Optional. Set to `true` to mount the EFS as read-only. Defaults to
`false`.
## Additional Notes
- The ECS role's policy for your task is automatically modified
to allow read/write EFS access. Set `readonly:` in your task's
`efs:` section to "true" if you only want read access.
- Your EFS security group must allow access from private subnets
where the Fargate tasks are placed.
- No changes are made to the EFS security group; the framework
assumes access is already configured.
- Only one EFS volume is currently supported per task configuration.
- EFS volumes are task-scoped and reused only where explicitly configured.
- The framework does not automatically provision an EFS
filesystem for you. The framework does however validate that the
filesystem exists in the current account and region.
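Putting the pieces together, a task that mounts an EFS volume might look like the sketch below. This assumes the `efs:` block nests under the task entry, consistent with the per-task design described above; the filesystem ID is a placeholder.

```yaml
tasks:
  my-task:
    type: daemon
    image: my-image:latest
    efs:
      id: fs-1234567b
      mount_point: /mnt/my-stack
      path: /
      readonly: true
```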
[Back to Table of Contents](#table-of-contents)
# CONFIGURATION
The `App::FargateStack` framework defines your application stack
using a YAML configuration file. This file describes your
application's services, their resource needs, and how they should be
deployed. The configuration is updated whenever you run `plan` or
`apply`.
## GETTING STARTED
The fastest way to get up and running with `App::FargateStack` is to
use the `create-stack` command to generate a configuration file,
inspect the deployment plan, and then apply it.
### Step 1: Create a Configuration Stub
First, generate a minimal YAML configuration file. The `create-stack`
command provides a shorthand syntax to do this. You only need to
provide an overall application name, a service type, a service name,
and the container image to use.
This command will create a file named `my-stack.yml` in your current
directory. Make sure you have your AWS profile configured in your
environment or pass it using the `--profile` option.
app-FargateStack create-stack my-stack daemon:my-stack-daemon image:my-stack-daemon:latest
This will produce a configuration stub that looks like this:
    app:
      name: my-stack
    tasks:
      my-stack-daemon:
        image: my-stack-daemon:latest
        type: daemon
This file contains the three key pieces of information you provided:
the application name, the task name, and the image to use.
### Step 2: Plan the Deployment (Dry Run)
Next, run the `plan` command. This is a crucial step that acts as a
dry run. The framework will:
- Read your minimal configuration file.
- Intelligently discover resources in your AWS account (like your VPC and subnets).
- Determine what new resources need to be created (like IAM roles, a security group, an ECS cluster and a CloudWatch log group).
- Report a full plan of action without making any actual changes.
- Update your configuration file with the discovered values and
sensible defaults.
app-FargateStack plan
After this command completes, your `my-stack.yml` file will be fully
populated with all the information needed to provision your stack.
### Step 3: Apply the Changes
Once you are satisfied with the proposed changes, run the `apply`
command. This will execute the plan and create all the necessary AWS
resources.
app-FargateStack apply
### Step 4: Deploy and Start the Service
The `apply` command creates all the necessary **infrastructure**, but
it does not start your service. This separation allows you to manage
your infrastructure and your application's runtime state
independently.
To create the ECS service and start your container, use the
`deploy-service` command.
app-FargateStack deploy-service my-stack-daemon
By default, this will start one instance of your task. To check its
status, use the `status` command:
app-FargateStack status my-stack-daemon
And to stop the service, simply run:
app-FargateStack stop-service my-stack-daemon
To restart a stopped service, run:
app-FargateStack start-service my-stack-daemon
## VPC AND SUBNET DISCOVERY
If you do not specify a `vpc_id` in your configuration, the framework will attempt
to locate a usable VPC automatically.
A VPC is considered usable if it meets the following criteria:
- It is attached to an Internet Gateway (IGW)
- It has at least one available NAT Gateway
If no eligible VPCs are found, the process will fail with an error. If multiple
eligible VPCs are found, the framework will abort and list the candidate VPC IDs.
You must then explicitly set the `vpc_id:` in your configuration to resolve
the ambiguity.
If exactly one eligible VPC is found, it will be used automatically,
and a warning will be logged to indicate that the selection was
inferred.
## SUBNET SELECTION
If no subnets are specified in the configuration, the framework will query all
subnets in the selected VPC and categorize them as either public or private.
The task will be placed in a private subnet by default. For this to succeed,
your VPC must have at least one private subnet with a route to a NAT Gateway,
or have appropriate VPC endpoints configured for ECR, S3, STS, CloudWatch Logs,
and any other services your task needs.
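The public/private categorization can be sketched as follows. This is a simplified illustration of the rule of thumb (default route to an Internet Gateway means public; default route through a NAT Gateway means private), not the framework's actual discovery code:

```python
def classify_subnet(routes):
    """Classify a subnet from its route table entries.

    `routes` maps destination CIDR -> target ID (igw-*, nat-*, "local", ...).
    """
    default = routes.get("0.0.0.0/0", "")
    if default.startswith("igw-"):
        return "public"   # direct route to an Internet Gateway
    if default.startswith("nat-"):
        return "private"  # outbound-only, via a NAT Gateway
    return "isolated"     # no default route to the internet

print(classify_subnet({"10.0.0.0/16": "local", "0.0.0.0/0": "igw-0abc"}))  # public
print(classify_subnet({"10.0.0.0/16": "local", "0.0.0.0/0": "nat-0def"}))  # private
```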
If subnets are explicitly specified in your configuration, the
framework will validate them and warn if they are not reachable or are
not usable for Fargate tasks.
### Task placement and Availability Zones
The framework places each task's ENI into exactly one subnet, which fixes
that task in a single AZ. A service can span multiple AZs by listing
subnets from at least two AZs.
What the framework does:
- Prefers private subnets
If private subnets are defined in the configuration, tasks are placed
there. If no private subnets are defined, the framework falls back to
public subnets.
- Aligns ALB AZs with task placement
When a load balancer is used, the framework enables the ALB in the same
AZ set it selects for tasks (best practice). This is for resilience and
to avoid unnecessary cross-AZ hops; it is not a hard technical requirement.
- Requires two subnets
The configuration must specify at least two subnets in different AZs.
If subnets are not specified, the framework attempts to discover them,
but still requires at least two usable subnets (either both private or
both public). If fewer than two are available, it errors with guidance.
Notes on internet access and ALBs:
- Internet-facing ALB
An internet-facing ALB must be created in public subnets. Tasks may (and
usually should) remain in private subnets behind it.
- Egress from private subnets
For image pulls and outbound calls, use either a NAT Gateway in each AZ
or VPC endpoints for ECR (api and dkr) and S3.
- Egress from public subnets
If tasks are placed in public subnets without endpoints or NAT, they
require `assignPublicIp=ENABLED` to reach ECR/S3.
## REQUIRED SECTIONS
At minimum, your configuration must include the following:
    app:
      name: my-stack
    tasks:
      my-task:
        image: my-image
        type: daemon | task | http | https
For task types `http` or `https`, you must also specify a domain name
under the `app:` key. The default architecture for an HTTP service
looks like this:

               +---------------------+
               | ALB                 |
               | Listeners:          |
               |  - Port 80          |
               |  - Port 443 w/ TLS  |
               |    + ACM Cert       |
               |      (TLS/SSL)      |
               |      [if external]  |
               +----------+----------+
                          |
                   +------v-------+
                   | Target Group |
                   +------+-------+
                          |
                  +-------v---------+
                  |   ECS Service   |
                  |  (Fargate Task) |
                  +-------+---------+
                          |
                +---------v----------+
                | VPC Private Subnet |
                +--------------------+
This default architecture provides a repeatable, production-ready
deployment pattern for HTTP services with minimal configuration.
## Behavior by Task Type
For HTTP services, you set the task type to either "http" or "https"
(these are the only options that will trigger a task to be configured
for HTTP services). The table below summarizes the configurations by
task type.
+-------+----------+-------------+-----------+---------------+
| Type | ALB type | Certificate | Port | Hosted Zone |
+-------+----------+-------------+-----------+---------------+
| http | internal | No | 80 | private |
| https | external | Yes | 443 | public |
| | | | 80 => 443 | |
+-------+----------+-------------+-----------+---------------+
_NOTE: You must provide a domain name for both an internal and an
external facing HTTP service. This also implies you must have both a
**private** and a **public** hosted zone for your domain._
Your task type will also determine which type of subnet is required
and where to search for an existing ALB to use. If you want to prevent
re-use of an existing ALB and force the creation of a new one use the
`--create-alb` option when you run your first plan.
In your initial configuration you do not need to specify the subnets
or the hosted zone id. The framework will discover those and report
if any required resources are unavailable. If the task type is
"https", the script looks for a public zone, public subnets and an
internet-facing ALB otherwise it looks for a private zone, private
subnets and an internal ALB.
## ACM Certificate Management
If the task type is "https" and no ACM certificate currently exists
for your domain, the framework will automatically provision one. The
certificate will be created in the same region as the ALB and issued
via AWS Certificate Manager. The certificate is validated via DNS
and subsequently attached to the listener on port 443.
## Port and Listener Rules
For external-facing apps, a separate listener on port 80 is
created. It forwards traffic to port 443 using a default redirect rule
(301). If you do not want a redirect rule, set the `redirect_80:` key in
the `alb:` section to "false".
If you want your internal application to listen on a port other than
80, set the `port:` key in the `alb:` section to a new port
value.
## Example Minimal Configuration
    app:
      name: http-test
      domain: http-test.example.com
    tasks:
      apache:
        type: http
        image: http-test:latest
Based on this minimal configuration `app-FargateStack` will enrich
the configuration with appropriate defaults and proceed to provision
your HTTP service.
To do that, the framework attempts to discover the resources required
for your service. If your environment is not compatible with creating
the service, the framework will report the missing resources and
abort the process.
Given this minimal configuration for an internal ("http") or
external ("https") HTTP service, discovery entails:
- ...determining your VPC's ID
- ...identifying the private subnet IDs
- ...determining if there is an existing load balancer with the
correct scheme
- ...finding your load balancer's security group (if an ALB exists)
- ...looking for a listener rule on port 80 (and 443 if type is
"https"), including a default forwarding redirect rule
- ...validating that you have a private or public hosted zone
in Route 53 that supports your domain
- ...setting other defaults for additional resources to be built (log
groups, cluster, target group, etc)
- ...determining if an ACM certificate exists for your domain
(if type is "https")
_Note: Discovery of these resources is only done when they are
missing from your configuration. If you have multiple VPCs, for example,
you should explicitly set `vpc_id:` in the configuration to
identify the target VPC. Likewise, you can explicitly set other
resource configurations (subnets, ALBs, Route 53, etc.)._
Resources are provisioned and your configuration file is updated
incrementally as `app-FargateStack` compares your environment to the
environment required for your stack. When either `plan` or
`apply` completes, your configuration is updated, giving you complete
insight into what resources were found and what resources will be
provisioned. See [CONFIGURATION](#configuration) for complete details
on resource configurations.
Your environment will be validated against the criteria described
below.
- You have at least 2 private subnets available for deployment
Technically you can launch a task with only 1 subnet but for services
behind an ALB Fargate requires 2 subnets.
_When you create a service with a load balancer, you must specify
two or more subnets in different Availability Zones. - AWS Docs_
- You have a hosted zone for your domain of the appropriate type
(private for type "http", public for type "https")
As discovery progresses, existing and required resources are logged
and your configuration file is updated. If you are **NOT** running in
dryrun mode, resources will be created immediately as they are
discovered to be missing from your environment.
## Application Load Balancer
When you provision an HTTP service, whether or not it is secure, the
service will be placed behind an application load balancer. Your Fargate
service is created in private subnets, so your VPC must contain at
least two private subnets. Your load balancer can be either
_internal_ or _internet-facing_.
By default, the framework looks for and will reuse a load balancer
with the correct scheme (internal or internet-facing), in a subnet
aligned with your task type. The ALB will be placed in public subnets
if it is internet-facing. You can override that behavior either by
explicitly setting the ALB ARN in the `alb:` section of the
configuration or by passing `--create-alb` when you run your plan and apply.
If no ALB is found or you passed the `--create-alb` option, a new ALB
is provisioned. When creating a new ALB, `app-FargateStack` will also
create the necessary listeners and listener rules for the ports you
have configured.
### Why Does the Framework Force the Use of a Load Balancer?
While it is possible to avoid the use or the creation of a load balancer
for your service, the framework forces you to use one for at least two
reasons. Firstly, the IP address of your service may not be stable and
is not friendly for development or production purposes. The framework
is, after all, trying its best to promote best practices while
preventing you from having to know how all the sausage is made.
Secondly, it is almost guaranteed that you will eventually want
a domain name for your production service - whether it is an
internally facing microservice or an externally facing web
application.
Creating an alias in Route 53 for your domain pointing to the ALB
ensures you don't need to update application configurations with the
service's dynamic IP address. Additionally, using a load balancer
allows you to create custom routing rules to your service. If you want
to run multiple tasks for your service to handle more
traffic, a load balancer is required.
With those things in mind, the framework automatically uses an ALB for
your HTTP services.