Client applications are first-class citizens in enterprise systems. They increasingly run in containers next to the services they call.

This guide demystifies the Jakarta EE Application Client Container (ACC), shows when to use it (or not), and gives you concrete steps to package, secure, network, and operate a client-in-container with confidence.

Overview

If you run Java client code that consumes EJBs, JMS, or secured HTTP services, you need a clear path to containerize it without breaking authentication, naming, or performance.

This article explains the ACC’s role and compares it to web and desktop/CLI clients. Then it gets practical with vendor-specific how-tos, mTLS/OIDC/Kerberos patterns, Docker and Kubernetes networking, performance tips, troubleshooting, and compliance.

You’ll find precise commands, file names, and decisions that translate directly to your pipelines.

For foundational context, consult the current Jakarta EE Platform specifications. For networking behavior and cluster DNS, see the official Docker networking documentation and Kubernetes Services and DNS.

Security guidance aligns to RFC 8446 (TLS 1.3) and RFC 6749 (OAuth 2.0). Hardening and compliance reference the CIS Docker Benchmark, CycloneDX, and Sigstore Cosign.

What is the Application Client Container (ACC)?

The Application Client Container (ACC) is the Jakarta EE runtime that executes a packaged client module. It provides container services such as dependency injection, security context propagation, and JNDI access to that client.

Unlike a web container (servlets/JSP) or EJB container (server-side business logic), the ACC runs on the client side. It is designed to interact with remote enterprise services as a managed client.

In practice, an ACC runs a client JAR with an application-client.xml descriptor and optional annotations. It bootstraps container services locally so you can perform portable JNDI lookups, inject resources, and authenticate via the platform security APIs.
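As a sketch, a minimal META-INF/application-client.xml for a client that looks up one remote EJB might look like this (the display name, reference name, and remote interface are illustrative, and the launcher still needs a Main-Class entry in the JAR manifest):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<application-client xmlns="https://jakarta.ee/xml/ns/jakartaee"
                    version="10">
  <display-name>reporting-client</display-name>
  <!-- Portable reference resolved under java:comp/env/ejb/ReportService -->
  <ejb-ref>
    <ejb-ref-name>ejb/ReportService</ejb-ref-name>
    <ejb-ref-type>Session</ejb-ref-type>
    <remote>com.example.ReportServiceRemote</remote>
  </ejb-ref>
</application-client>
```

With this descriptor in place, the client code performs a portable lookup of java:comp/env/ejb/ReportService instead of hard-coding a vendor-specific global JNDI name.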

If you only need HTTP/REST or raw JMS from a plain Java application, you may not need a full application client container. If you rely on remote EJBs and Jakarta EE resource injection, the ACC provides a standardized client-side environment.

ACC architecture and lifecycle

An ACC wraps your client main class with a lightweight container that initializes naming, injection, security, and lifecycle callbacks. This matters when you migrate Java EE-era clients to containers or modernize to Jakarta EE. It defines how client code discovers services and presents credentials.

At startup, the ACC processes application-client.xml and initializes a CDI-like bootstrap if supported. It sets up JNDI InitialContext with provider-specific factories and configures security realms.

For example, GlassFish/Payara ACC initializes RMI-IIOP naming to port 3700 for remote EJB discovery. WildFly clients typically use HTTP remoting for EJB invocations. The client then runs your main class with container-provided resources wired in.

Be aware of limitations. Some servers don’t ship a full ACC, and injection semantics may be narrower than on the server. Favor portable JNDI names and explicit configuration.

If you don’t need ACC features (for example, you only call REST), a plain Java SE client is simpler and often faster to start.

Packaging and running a Jakarta EE application client (GlassFish/Payara/WildFly/Open Liberty)

Packaging a Jakarta EE application client centers on a client JAR with META-INF/application-client.xml and a manifest Main-Class. Running differs by vendor. GlassFish/Payara provide an appclient launcher. WildFly offers remote EJB and naming clients without a full ACC. Open Liberty focuses on REST/JMS-style clients rather than a classic ACC. Choose the path that matches your remote APIs and support policy.

The portable baseline is consistent. Ensure your client JAR includes dependencies or is launched with a vendor-provided client runtime. Define resource references in application-client.xml when needed. Set JNDI and security properties via a configuration file or environment variables.

Start with the simplest run command. Then add secrets, truststores, and discovery properties as you harden.

GlassFish/Payara: appclient tool, application-client.xml, JNDI names

For GlassFish and Payara, the appclient tool executes the application client container locally. It connects to the server for remote EJBs and resources. This is the most direct route if your client relies on EJB remotes and container-managed injection.

Tip: keep JNDI names portable and prefer TLS-secured IIOP if available. Test with and without appclient to decide whether you need full ACC semantics or a simpler Java SE client suffices.
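If you opt for a plain Java SE client instead of the full appclient launcher, the GlassFish/Payara remote naming bootstrap reduces to a handful of ORB properties. A minimal sketch, assuming the server is reachable as host `payara` on the default naming port 3700 (the actual InitialContext call also requires the vendor client libraries, e.g. gf-client.jar, on the classpath):

```java
import java.util.Properties;

public class PayaraNaming {
    // Builds InitialContext properties for GlassFish/Payara remote naming.
    // Host and port would normally come from environment or config, not literals.
    static Properties namingProps(String host, int iiopPort) {
        Properties p = new Properties();
        p.setProperty("org.omg.CORBA.ORBInitialHost", host);
        p.setProperty("org.omg.CORBA.ORBInitialPort", Integer.toString(iiopPort));
        return p;
    }

    public static void main(String[] args) {
        Properties p = namingProps(System.getenv().getOrDefault("EJB_HOST", "payara"), 3700);
        // new InitialContext(p) would contact the server; we only print the
        // configuration here so the snippet runs without a live Payara instance.
        System.out.println(p.getProperty("org.omg.CORBA.ORBInitialHost") + ":"
                + p.getProperty("org.omg.CORBA.ORBInitialPort"));
    }
}
```

With the full ACC instead, the equivalent is typically a single launcher invocation such as `appclient -client reporting-client.jar`, which wires these properties for you.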

WildFly: EJB client configuration and remote naming

WildFly doesn’t ship a classic ACC but provides a robust EJB Remote client over HTTP remoting. This is ideal when you want lightweight Java SE clients that still invoke remote EJBs with security and pooling.

Tip: instrument and log the EJB client discovery at DEBUG on first run. Mis-typed module or bean names are the most common cause of NameNotFound and NoSuchEJB exceptions.
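A minimal sketch of the legacy jboss-ejb-client.properties format (host, port, and credentials are illustrative; newer clients can express the same settings in wildfly-config.xml):

```properties
endpoint.name=client-endpoint
remote.connections=default
remote.connection.default.host=wildfly
remote.connection.default.port=8080
remote.connection.default.username=client-user
remote.connection.default.password=change-me
# SASL policy flag commonly required for plain password authentication
remote.connection.default.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS=false
```

Lookups then follow the pattern `ejb:<app>/<module>/<distinct>/<bean>!<remote-interface>`; the distinct name is usually empty, and a wrong segment here is exactly the mis-typed module or bean name the tip below warns about.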

Open Liberty: application client support and configuration

Open Liberty emphasizes lightweight, cloud-native runtimes and does not provide a traditional ACC. Remote EJB invocations are not part of Liberty’s EJB Lite profile. Instead, prefer HTTP-based integration.

Tip: if you require full ACC semantics or remote EJBs, consider server distributions that still ship an ACC (e.g., GlassFish/Payara). Or migrate your contracts to HTTP/gRPC to stay aligned with Liberty’s strengths.
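For the HTTP-first path Liberty favors, a plain java.net.http client needs no container at all. A sketch against a hypothetical Liberty-hosted REST endpoint (the base URL and path are illustrative):

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

public class RestCall {
    // Builds a GET request against a hypothetical /api/reports endpoint.
    static HttpRequest buildRequest(String baseUrl) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/api/reports"))
                .timeout(Duration.ofSeconds(5))
                .header("Accept", "application/json")
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildRequest("https://liberty:9443");
        // HttpClient.newHttpClient().send(req, ...) would perform the call;
        // we only inspect the request here so the snippet runs offline.
        System.out.println(req.method() + " " + req.uri());
    }
}
```

The same HttpClient instance can later be built with a custom SSLContext for mTLS, which keeps the migration path from EJB remotes incremental.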

ACC vs web clients vs desktop/CLI clients

Choosing between an ACC, a web browser client, or a desktop/CLI client depends on APIs, security, and operational surface area. ACCs shine when your client already depends on remote EJBs and you want container-managed naming and security. Web clients excel for UI and broad reach. Desktop/CLI clients win for operator workflows and offline capability.

An ACC reduces client code for JNDI and security but ties you to vendor runtimes and older protocols like RMI-IIOP or HTTP remoting. A web client integrates over REST or WebSockets and shifts state to the server, which simplifies distribution but requires server endpoints. A desktop/CLI client in Java SE is lightweight, testable, and easy to containerize, but you’ll write more glue for discovery, resilience, and authentication.

Pragmatic rule: if you’re starting fresh, prefer REST/gRPC clients. Reserve ACC for legacy EJB interop or where the benefit of container-managed features outweighs added footprint and vendor coupling.

Decision framework: containerized client vs native vs virtual machine

Running a client in Docker, on the host, or in a VM is a trade-off across latency, startup, resource isolation, operational complexity, and compliance. Containerizing a client simplifies distribution and policy enforcement but introduces extra networking layers and image hygiene duties.

Use this short decision checklist:

- Latency: does an extra NAT or bridge hop matter for your call pattern?
- Startup: must the client be ready in seconds (container) or is VM boot time acceptable?
- Isolation: do you need kernel-level separation (VM), or are namespaces and cgroups enough?
- Operations: who patches the base image or OS, and how are updates rolled out?
- Compliance: do policies require SBOMs, image signing, or a specific hardening baseline?

When benchmarking to anchor expectations, pin the methodology: same hardware, Java 17, TLS 1.3, 10k RPCs, container cgroup limits off.

Treat any numbers you gather as directional. Measure your workload under realistic TLS, GC, and connection pooling settings before locking decisions.

Secure authentication for client containers (mTLS, OIDC, Kerberos)

Headless clients need strong, automatable authentication that works in containers and scales with rotation and revocation. mTLS is excellent for service-to-service identity. OAuth2/OIDC fits HTTP APIs. Kerberos/JAAS remains common in legacy AD or SPNEGO environments.

Start with the protocol your servers already support and centralize secrets management. Across methods, pin to current protocol versions, isolate secrets with read-only mounts, and automate expiry-driven refresh.

TLS 1.3 reduces the full handshake to one round trip and removes obsolete ciphers, improving both security and latency (per RFC 8446 (TLS 1.3)). Validate time sync and DNS early; skewed clocks and stale resolution silently break authentication flows under containerized networking.

mTLS with Java keystores (JKS/PKCS#12) and certificate rotation

Use client certificates when you need mutual authentication without interactive logins. Java supports both JKS and PKCS#12 keystores; PKCS#12 has been the JDK default since Java 9 and is preferred for interoperability.
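A minimal sketch of building a TLS 1.3 SSLContext from PKCS#12 stores (the helper names are ours; keystore paths and passwords would come from mounted secrets, never literals):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class MutualTls {
    // Builds an SSLContext that presents the client certificate (mTLS) and
    // validates the server against the given truststore. Passing a null
    // truststore falls back to the JDK default trust anchors.
    static SSLContext build(KeyStore clientKeys, char[] keyPassword, KeyStore trust) throws Exception {
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(clientKeys, keyPassword);
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trust);
        SSLContext ctx = SSLContext.getInstance("TLSv1.3");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return ctx;
    }

    // Loads a PKCS#12 keystore from a runtime-mounted path.
    static KeyStore loadPkcs12(Path path, char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance("PKCS12");
        try (InputStream in = Files.newInputStream(path)) {
            ks.load(in, password);
        }
        return ks;
    }
}
```

Rotation then becomes a matter of re-mounting the PKCS#12 file and rebuilding the context; a file watcher or periodic reload keeps long-running clients current without restarts.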

Tip: prefer short-lived client certs (hours to days) issued by an automated CA. Shorter lifetimes reduce blast radius without operator overhead.

OAuth2/OIDC client credentials for service accounts

When calling HTTP APIs, OAuth2 client credentials provide scoped tokens with auditable lifetimes. They fit non-interactive containers and integrate well with gateways. The client credentials grant is defined in Section 4.4 of OAuth 2.0 (see RFC 6749 (OAuth 2.0)).
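A sketch of the client credentials token request (the token endpoint and credential names are illustrative; in a container they would come from environment variables or mounted secrets):

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;

public class ClientCredentials {
    // Builds the RFC 6749 section 4.4 token request as a form-encoded POST.
    static HttpRequest tokenRequest(String tokenUrl, String clientId, String clientSecret) {
        String form = "grant_type=client_credentials"
                + "&client_id=" + URLEncoder.encode(clientId, StandardCharsets.UTF_8)
                + "&client_secret=" + URLEncoder.encode(clientSecret, StandardCharsets.UTF_8);
        return HttpRequest.newBuilder()
                .uri(URI.create(tokenUrl))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();
    }
}
```

Sending this with HttpClient yields a JSON access token response; cache the token until shortly before its expiry and refresh proactively rather than on the first 401.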

Tip: minimize scopes. Split clients by environment or service to reduce lateral blast radius if a secret leaks.

Kerberos/JAAS configuration in containers

Kerberos remains common in AD-backed enterprises and for SPNEGO-authenticated HTTP or JMS. Containers must supply krb5 and JAAS configs and maintain tight time sync.
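A sketch of the JAAS login configuration for a keytab-based container identity (the principal, realm, and keytab path are illustrative):

```
// jaas.conf — mounted read-only into the container
KrbClient {
    com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        keyTab="/etc/security/keytabs/myclient.keytab"
        principal="myclient@EXAMPLE.COM"
        storeKey=true
        doNotPrompt=true;
};
```

The JVM is then pointed at both configs at launch, typically with `-Djava.security.krb5.conf=/etc/krb5.conf -Djava.security.auth.login.config=/etc/jaas.conf`, and the container image itself contains neither the keytab nor the realm details.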

Tip: isolate keytabs per service identity and rotate them alongside SPNs. Never bake them into images.

Networking patterns for client-to-server traffic in containers

Your client’s reliability depends on predictable name resolution, routing, and TLS termination. Docker’s default bridge and host networking have different DNS and MTU characteristics. Kubernetes adds cluster DNS and Services that shape discovery.

Choose the simplest mode that meets your security and observability needs. Plan for TLS termination points (client direct to server vs via a sidecar or gateway). Account for NAT effects on MTU. Standardize on health and readiness checks that reflect actual upstream reachability, not just local process aliveness.

Docker bridge vs host networking and DNS/service discovery

Docker bridge networks provide isolation and built-in DNS for container names, while host networking trades isolation for lower overhead and direct access.

Tip: test with dig, getent hosts, and path MTU probes before production rollout. These quickly reveal DNS search path or fragmentation issues.
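The checks in the tip above can be scripted for a quick in-container smoke test (hostnames are illustrative; pass your service name as the first argument):

```shell
#!/bin/sh
# Run inside the client container to verify name resolution and path MTU.
HOST="${1:-localhost}"

# Resolve via the container's configured DNS (works with bridge-network names).
getent hosts "$HOST"

# Show the DNS search path and nameservers handed to this container.
grep -E 'search|nameserver' /etc/resolv.conf || true

# DF-bit probe: a 1472-byte payload fits a 1500 MTU; failure suggests
# fragmentation or an overlay/NAT path with a smaller MTU.
ping -c 1 -M do -s 1472 "$HOST" || echo "path MTU below 1500 or ping unavailable"
```

Run it once per network mode you plan to support; differences between bridge and host output usually point straight at the DNS search path or MTU issue.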

Kubernetes Services, DNS, and sidecar/proxy patterns

In Kubernetes, clients typically call Services and use cluster DNS for names. Sidecars add mTLS and policy without changing client code.

Tip: validate that your client honors DNS TTLs. JVMs cache successful lookups (for 30 seconds by default, indefinitely when a security manager is installed) unless tuned via the networkaddress.cache.ttl security property.
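Tightening the JVM's DNS cache is a two-line change, shown here as a sketch (the 30-second and 5-second values are illustrative choices, not defaults):

```java
import java.security.Security;

public class DnsTtl {
    // Caps positive DNS caching at 30s and negative caching at 5s so the
    // client re-resolves Service names when Kubernetes endpoints change.
    // Must run before the first lookup; the same properties can instead be
    // set in $JAVA_HOME/conf/security/java.security.
    static void capDnsCache() {
        Security.setProperty("networkaddress.cache.ttl", "30");
        Security.setProperty("networkaddress.cache.negative.ttl", "5");
    }

    public static void main(String[] args) {
        capDnsCache();
        System.out.println(Security.getProperty("networkaddress.cache.ttl"));
    }
}
```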

Performance and benchmarking considerations

Performance hinges on upstream latency, TLS handshake cost, and JVM warmup. Containers add minimal overhead when configured correctly. Establish a repeatable benchmark that reflects your client’s call patterns and security posture.

A simple methodology: fix hardware and JVM version, disable CPU frequency scaling, and use TLS 1.3 with session resumption. Run 3 warmup minutes, then 5 measurement minutes with p50/p95 latencies, CPU, memory RSS, and GC metrics. Compare native vs Docker bridge vs your orchestrator. Track cold-start time to first successful authenticated call.
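A minimal harness for the p50/p95 measurement described above (a sketch; substitute your actual authenticated RPC for the Runnable):

```java
import java.util.Arrays;

public class LatencyBench {
    // Returns {p50, p95} latency in nanoseconds for the given operation.
    // Warmup iterations are discarded so JIT compilation settles first.
    static long[] percentiles(Runnable op, int warmup, int samples) {
        for (int i = 0; i < warmup; i++) op.run();
        long[] lat = new long[samples];
        for (int i = 0; i < samples; i++) {
            long t0 = System.nanoTime();
            op.run();
            lat[i] = System.nanoTime() - t0;
        }
        Arrays.sort(lat);
        return new long[] { lat[samples / 2], lat[(int) (samples * 0.95)] };
    }
}
```

Run the same harness natively, under Docker bridge, and under your orchestrator so the only variable is the networking layer, and record cold-start time to the first successful call separately.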

Tuning tips:

- Enable TLS 1.3 session resumption so repeat connections skip the full handshake.
- Reuse pooled connections; per-call connection setup dominates small-payload latency.
- Size heap and GC for the container's cgroup limits rather than host memory.
- Lower the JVM's DNS cache TTL (networkaddress.cache.ttl) so failover is timely.
- Budget for JVM warmup: track time to the first successful authenticated call, not just process start.

Dockerizing a Jakarta EE application client

A containerized client is reproducible and easier to secure, but you must handle classloading, secrets, and environment-specific config cleanly. Start from a slim JRE base, run as non-root, and externalize keystores and endpoints.

A practical pattern:

- Build the client JAR in a multi-stage build and copy it onto a slim JRE base image.
- Run as a dedicated non-root user, with a read-only filesystem where possible.
- Externalize endpoints and JNDI settings via environment variables.
- Mount keystores and secrets read-only at runtime, with permissions scoped to the runtime user.
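As a sketch, a Dockerfile following this pattern might look like the following (the base image tag, JAR name, and paths are illustrative):

```dockerfile
# Runtime stage only; a preceding build stage would produce the client JAR.
FROM eclipse-temurin:17-jre AS runtime
RUN useradd --system --uid 10001 appclient
WORKDIR /opt/client
COPY --chown=appclient target/reporting-client.jar ./client.jar
# Endpoints come from the environment; keystores are mounted at runtime,
# never copied into the image.
ENV EJB_HOST=app-server
USER appclient
ENTRYPOINT ["java", "-jar", "client.jar"]
```

At run time, secrets arrive via `--mount type=bind,ro` (or a Kubernetes Secret volume) so the image stays identical across environments.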

Tip: never bake secrets or keystores into the image. Mount them at runtime and scope file permissions to the runtime user only.

Persistent state and offline operation

Many clients need a durable cache or queue to survive restarts or intermittent networks. In containers, use volumes for persistence, encrypt sensitive data at rest, and design sync strategies that reconcile safely when connectivity returns.

Persist only what you must: local credentials, last-seen offsets, idempotent request journals, or a small embedded database. Mount a named volume (e.g., --mount type=volume,src=client-data,dst=/var/lib/myclient) or a hostPath/PersistentVolume in Kubernetes.

Protect sensitive files with OS permissions and application-level encryption. If the host or node is untrusted, add disk encryption at the platform layer.

For offline mode, apply a write-ahead log or outbox pattern and reconcile with at-least-once semantics. Bound caches by size and age. Surface a health endpoint that distinguishes “ready but offline” from “fully synced,” so orchestrators don’t thrash healthy but disconnected pods.
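A minimal sketch of the outbox journal described above (class and method names are ours; a real implementation would add size/age bounds and server-side deduplication by request ID):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class Outbox {
    private final Path journal;

    Outbox(Path journal) { this.journal = journal; }

    // Append-only journal on a mounted volume: each line is one pending
    // request, fsynced before the caller acknowledges it locally.
    void record(String requestId, String payload) throws IOException {
        String line = requestId + "\t" + payload + System.lineSeparator();
        Files.writeString(journal, line, StandardCharsets.UTF_8,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND, StandardOpenOption.SYNC);
    }

    // On reconnect, replay with at-least-once semantics.
    List<String> pending() throws IOException {
        return Files.exists(journal) ? Files.readAllLines(journal) : List.of();
    }

    // Called only after the server confirms all replayed requests.
    void truncateAfterSync() throws IOException {
        Files.deleteIfExists(journal);
    }
}
```

Pointing `journal` at the mounted volume path (e.g. under /var/lib/myclient) makes the queue survive container restarts without any extra infrastructure.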

Troubleshooting connectivity and TLS issues

Most client-in-container outages reduce to DNS, MTU, clock skew, or certificate trust problems. A fast, repeatable playbook gets you from symptom to root cause.

Tip: capture a short tcpdump within the container namespace to see SYN, MSS, and TLS alerts. It often short-circuits guesswork when facing MTU or SNI mismatches.

Compliance, SBOMs, and image signing

Regulated environments expect you to know what’s in your container and to prove its provenance. SBOMs and signatures, together with baseline hardening, deliver that assurance.

Generate an SBOM during build (for example, CycloneDX via your build tool) and attach it as an image layer or artifact in your registry. Sign images using Sigstore Cosign and enforce verification in CI/CD or via an admission policy.

Align your image to the CIS Docker Benchmark by running as non-root, minimizing capabilities, and pinning package versions.

Concrete steps:

- Generate a CycloneDX SBOM during the build and attach it to the image in your registry.
- Sign each image with Cosign and verify signatures in CI/CD or via an admission policy.
- Harden per the CIS Docker Benchmark: non-root user, minimal capabilities, pinned package versions.
- Scan the SBOM for known CVEs on every build, and on a schedule for images already deployed.

Tip: fail builds on high-severity CVEs in your SBOM scan unless a documented exception exists. It’s easier than negotiating emergency changes after deployment.

Roadmap: ACC support and modern alternatives

ACC support has narrowed under Jakarta EE as vendors prioritize lightweight, HTTP-first stacks. GlassFish and Payara continue to support an application client container and appclient workflows. WildFly emphasizes remote EJB over HTTP remoting without a classic ACC. Open Liberty focuses on REST, MicroProfile, and JMS rather than remote EJB or ACC semantics.

Always verify current capabilities against the latest Jakarta EE Platform specifications, especially since EE 9+ transitioned to jakarta.* packages.

If you’re modernizing, map EJB remotes to REST/gRPC endpoints or messaging. REST with strong types (OpenAPI + codegen) and gRPC for low-latency RPCs reduce client runtime coupling. They also make zero-trust and observability simpler.

For interactive use cases, consider web UI with WebSockets for real-time updates. For headless operators, a CLI client in Java SE with OAuth2 or mTLS is usually leaner than ACC and easier to containerize.

Migration tips:

- Inventory your remote EJB interfaces first; they define the contracts you must re-expose.
- Map each contract to REST or gRPC, generating client stubs from OpenAPI or proto definitions.
- Move real-time interactive flows to WebSockets and headless workflows to a Java SE CLI with OAuth2 or mTLS.
- Migrate incrementally: run old and new clients side by side until parity is proven.

Final thought: choose the smallest sufficient runtime for your client. If ACC features are essential today, use them confidently and containerize with the security and networking patterns above. But if your path forward is HTTP-first, start the migration now. The operational simplicity and ecosystem support will compound quickly.