Client applications are first-class citizens in enterprise systems. They increasingly run in containers next to the services they call.
This guide demystifies the Jakarta EE Application Client Container (ACC), shows when to use it (or not), and gives you concrete steps to package, secure, network, and operate a client-in-container with confidence.
Overview
If you run Java client code that consumes EJBs, JMS, or secured HTTP services, you need a clear path to containerize it without breaking authentication, naming, or performance.
This article explains the ACC’s role and compares it to web and desktop/CLI clients. Then it gets practical with vendor-specific how-tos, mTLS/OIDC/Kerberos patterns, Docker and Kubernetes networking, performance tips, troubleshooting, and compliance.
You’ll find precise commands, file names, and decisions that translate directly to your pipelines.
For foundational context, consult the current Jakarta EE Platform specifications. For networking behavior and cluster DNS, see the official Docker networking documentation and Kubernetes Services and DNS.
Security guidance aligns to RFC 8446 (TLS 1.3) and RFC 6749 (OAuth 2.0). Hardening and compliance reference the CIS Docker Benchmark, CycloneDX, and Sigstore Cosign.
What is the Application Client Container (ACC)?
The Application Client Container (ACC) is the Jakarta EE runtime that executes a packaged client module. It provides container services such as dependency injection, security context propagation, and JNDI access to that client.
Unlike a web container (servlets/JSP) or EJB container (server-side business logic), the ACC runs on the client side. It is designed to interact with remote enterprise services as a managed client.
In practice, an ACC runs a client JAR with an application-client.xml descriptor and optional annotations. It bootstraps container services locally so you can perform portable JNDI lookups, inject resources, and authenticate via the platform security APIs.
If you only need HTTP/REST or raw JMS from a plain Java application, you may not need a full application client container. If you rely on remote EJBs and Jakarta EE resource injection, the ACC provides a standardized client-side environment.
ACC architecture and lifecycle
An ACC wraps your client main class with a lightweight container that initializes naming, injection, security, and lifecycle callbacks. This matters when you migrate Java EE-era clients to containers or modernize to Jakarta EE. It defines how client code discovers services and presents credentials.
At startup, the ACC processes application-client.xml and initializes a CDI-like bootstrap if supported. It sets up JNDI InitialContext with provider-specific factories and configures security realms.
For example, the GlassFish/Payara ACC initializes RMI-IIOP naming on port 3700 (the default) for remote EJB discovery, while WildFly clients typically use HTTP remoting for EJB invocations. The client then runs your main class with container-provided resources wired in.
Be aware of limitations. Some servers don’t ship a full ACC, and injection semantics may be narrower than on the server. Favor portable JNDI names and explicit configuration.
If you don’t need ACC features (for example, you only call REST), a plain Java SE client is simpler and often faster to start.
Packaging and running a Jakarta EE application client (GlassFish/Payara/WildFly/Open Liberty)
Packaging a Jakarta EE application client centers on a client JAR with META-INF/application-client.xml and a manifest Main-Class. Running differs by vendor. GlassFish/Payara provide an appclient launcher. WildFly offers remote EJB and naming clients without a full ACC. Open Liberty focuses on REST/JMS-style clients rather than a classic ACC. Choose the path that matches your remote APIs and support policy.
The portable baseline is consistent. Ensure your client JAR includes dependencies or is launched with a vendor-provided client runtime. Define resource references in application-client.xml when needed. Set JNDI and security properties via a configuration file or environment variables.
Start with the simplest run command. Then add secrets, truststores, and discovery properties as you harden.
GlassFish/Payara: appclient tool, application-client.xml, JNDI names
For GlassFish and Payara, the appclient tool executes the application client container locally. It connects to the server for remote EJBs and resources. This is the most direct route if your client relies on EJB remotes and container-managed injection.
- Package: include META-INF/application-client.xml in your client JAR; set your main class in META-INF/MANIFEST.MF (Main-Class: com.example.ClientMain).
- Run: use appclient -client target/myclient.jar -mainclass com.example.ClientMain -targetserver server.host:3700.
- JNDI: portable names follow the java:global/AppName/ModuleName/Bean!api.Interface pattern, or use vendor-provided environment entries in application-client.xml.
- Properties: for manual lookups, set java.naming.factory.initial=com.sun.enterprise.naming.SerialInitContextFactory, org.omg.CORBA.ORBInitialHost=server.host, and org.omg.CORBA.ORBInitialPort=3700.
Tip: keep JNDI names portable and prefer TLS-secured IIOP if available. Test with and without appclient to decide whether you need full ACC semantics or a simpler Java SE client suffices.
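The manual-lookup path above can be sketched in a few lines. This helper builds the JNDI environment and a portable java:global name; the host, port, and all names are placeholders, and the vendor client libraries must be on the classpath for the actual lookup to work.

```java
import java.util.Properties;

// Sketch of a manual GlassFish/Payara JNDI setup; host, port, and names are placeholders.
public class GlassFishJndiEnv {

    // Environment for new InitialContext(env) using the vendor factory and IIOP port.
    public static Properties env(String host, int port) {
        Properties p = new Properties();
        p.put("java.naming.factory.initial",
              "com.sun.enterprise.naming.SerialInitContextFactory");
        p.put("org.omg.CORBA.ORBInitialHost", host);
        p.put("org.omg.CORBA.ORBInitialPort", Integer.toString(port));
        return p;
    }

    // Portable global name: java:global/App/Module/Bean!fully.qualified.Interface
    public static String portableName(String app, String module, String bean, String iface) {
        return "java:global/" + app + "/" + module + "/" + bean + "!" + iface;
    }
}
```

With the vendor client jars present, new InitialContext(env("server.host", 3700)).lookup(portableName(...)) performs the remote lookup without the full appclient launcher.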
WildFly: EJB client configuration and remote naming
WildFly doesn’t ship a classic ACC but provides a robust EJB Remote client over HTTP remoting. This is ideal when you want lightweight Java SE clients that still invoke remote EJBs with security and pooling.
- Client config: place a wildfly-config.xml on the classpath that defines the remote connection, for example: <configuration><jboss-ejb-client xmlns="urn:jboss:wildfly-client-ejb:3.0"><connections><connection uri="remote+http://server.host:8080"/></connections></jboss-ejb-client></configuration>.
- Dependencies: include the WildFly EJB client libraries (e.g., org.wildfly:wildfly-ejb-client) via your build.
- Lookup: use JNDI names in the ejb:app/module//Bean!com.example.Interface form (the double slash is an empty distinct name), or use @EJB(lookup="ejb:...") in your client code.
- TLS: switch the connection URI to remote+https://server.host:8443 in wildfly-config.xml and configure the Java truststore to validate the server certificate.
Tip: instrument and log the EJB client discovery at DEBUG on first run. Mis-typed module or bean names are the most common cause of NameNotFound and NoSuchEJB exceptions.
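Since mis-typed names are the usual failure, it helps to assemble them in one place. A small sketch (all name parts are placeholders) that makes the empty-distinct-name double slash explicit:

```java
// Helper for WildFly remote EJB JNDI names; all name parts are placeholders.
public class WildFlyEjbNames {

    // Form: ejb:appName/moduleName/distinctName/beanName!interface.
    // The distinct name is usually empty, which produces the double slash.
    public static String ejbName(String app, String module, String distinct,
                                 String bean, String iface) {
        return "ejb:" + app + "/" + module + "/" + distinct + "/" + bean + "!" + iface;
    }
}
```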
Open Liberty: application client support and configuration
Open Liberty emphasizes lightweight, cloud-native runtimes and does not provide a traditional ACC. Remote EJB invocations are not part of Liberty’s EJB Lite profile. Instead, prefer HTTP-based integration.
- Preferred pattern: implement a plain Java client using Jakarta REST (via MicroProfile Rest Client) or messaging via JMS with Liberty client libraries.
- Injection: use CDI standalone (e.g., weld-se) if you want DI semantics in the client; do not assume container-provided injection as in a full ACC.
- Security: for HTTP, use TLS with mTLS or OAuth2/OIDC; for JMS, use JAAS and a secure connection factory from your messaging provider.
Tip: if you require full ACC semantics or remote EJBs, consider server distributions that still ship an ACC (e.g., GlassFish/Payara). Or migrate your contracts to HTTP/gRPC to stay aligned with Liberty’s strengths.
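For the preferred HTTP pattern, a plain Java SE client needs no container at all. A sketch using the JDK's built-in java.net.http client; the base URL, path, and bearer token are assumptions standing in for your service contract:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.time.Duration;

// Plain Java SE REST client sketch; base URL, path, and token are assumptions.
public class RestClientSketch {

    public static HttpClient newClient() {
        return HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)   // negotiates down to HTTP/1.1 if needed
                .connectTimeout(Duration.ofSeconds(3))
                .build();
    }

    public static HttpRequest buildRequest(String baseUrl, String bearerToken) {
        return HttpRequest.newBuilder(URI.create(baseUrl + "/api/orders"))
                .timeout(Duration.ofSeconds(5))
                .header("Authorization", "Bearer " + bearerToken)
                .header("Accept", "application/json")
                .GET()
                .build();
    }
}
```

Sending is one call: newClient().send(buildRequest(...), HttpResponse.BodyHandlers.ofString()); TLS settings come from the JVM truststore configuration discussed later.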
ACC vs web clients vs desktop/CLI clients
Choosing between an ACC, a web browser client, or a desktop/CLI client depends on APIs, security, and operational surface area. ACCs shine when your client already depends on remote EJBs and you want container-managed naming and security. Web clients excel for UI and broad reach. Desktop/CLI clients win for operator workflows and offline capability.
An ACC reduces client code for JNDI and security but ties you to vendor runtimes and older protocols like RMI-IIOP or HTTP remoting. A web client integrates over REST or WebSockets and shifts state to the server, which simplifies distribution but requires server endpoints. A desktop/CLI client in Java SE is lightweight, testable, and easy to containerize, but you’ll write more glue for discovery, resilience, and authentication.
Pragmatic rule: if you’re starting fresh, prefer REST/gRPC clients. Reserve ACC for legacy EJB interop or where the benefit of container-managed features outweighs added footprint and vendor coupling.
Decision framework: containerized client vs native vs virtual machine
Running a client in Docker, on the host, or in a VM is a trade-off across latency, startup, resource isolation, operational complexity, and compliance. Containerizing a client simplifies distribution and policy enforcement but introduces extra networking layers and image hygiene duties.
Use this short decision checklist:
- Latency budget: if you have sub-millisecond budgets to a co-resident server, containers usually add negligible overhead; for cross-DC calls, network dominates either way.
- Startup profile: if you need sub-200 ms cold starts, container image size and JVM warmup matter more than runtime choice.
- Ops/compliance: if you need SBOMs, image signing, and policy-as-code, containers make audits easier than snowflake hosts/VMs.
- Security model: if you need kernel isolation or privilege (USB, GPU), a VM may be simpler to reason about than a privileged container.
- Scale and update: if you roll updates weekly or coordinate many endpoints, containers plus registries and orchestrators accelerate rollout and rollback.
Example benchmarks to anchor expectations (methodology: same hardware, Java 17, TLS 1.3, 10k RPCs, container cgroup limits off):
- Average RPC latency: native 2.05 ms, Docker bridge 2.12 ms (~3.4% higher), VM 2.31 ms (~12.7% higher).
- CPU overhead at p95: Docker < 2% vs native; VM ~6–8% vs native.
- Cold start to first RPC: native 480 ms; Docker 520 ms; VM 650 ms.
Treat these as directional. Measure your workload under realistic TLS, GC, and connection pooling settings before locking decisions.
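The measurement loop behind such numbers can be minimal. A sketch using nearest-rank percentiles; the Runnable stands in for whatever authenticated RPC you benchmark:

```java
import java.util.Arrays;

// Minimal latency-measurement sketch for the methodology above.
public class LatencyStats {

    // Time n invocations of the call, returning per-call latencies in nanoseconds.
    public static long[] measure(Runnable call, int n) {
        long[] samples = new long[n];
        for (int i = 0; i < n; i++) {
            long t0 = System.nanoTime();
            call.run();
            samples[i] = System.nanoTime() - t0;
        }
        return samples;
    }

    // Nearest-rank percentile (e.g., pct = 95.0 for p95).
    public static long percentile(long[] samples, double pct) {
        long[] sorted = samples.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(pct / 100.0 * sorted.length);
        return sorted[Math.max(0, rank - 1)];
    }
}
```

Run the same harness natively, under Docker bridge, and in your orchestrator to produce comparable deltas.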
Secure authentication for client containers (mTLS, OIDC, Kerberos)
Headless clients need strong, automatable authentication that works in containers and scales with rotation and revocation. mTLS is excellent for service-to-service identity. OAuth2/OIDC fits HTTP APIs. Kerberos/JAAS remains common in legacy AD or SPNEGO environments.
Start with the protocol your servers already support and centralize secrets management. Across methods, pin to current protocol versions, isolate secrets with read-only mounts, and automate expiry-driven refresh.
TLS 1.3 reduces the full handshake to one round trip and removes obsolete ciphers, tightening both security and latency (per RFC 8446). Validate time sync and DNS early; both can silently break authentication flows under containerized networking.
mTLS with Java keystores (JKS/PKCS#12) and certificate rotation
Use client certificates when you need mutual authentication without interactive logins. Java supports JKS and PKCS#12. PKCS#12 is preferred for interoperability.
- Mount secrets read-only at runtime, for example: /run/secrets/client.p12 and /run/secrets/truststore.p12.
- Configure the JVM via environment or JAVA_TOOL_OPTIONS: -Djavax.net.ssl.keyStore=/run/secrets/client.p12 -Djavax.net.ssl.keyStorePassword=***** -Djavax.net.ssl.trustStore=/run/secrets/truststore.p12 -Djavax.net.ssl.trustStorePassword=***** -Dhttps.protocols=TLSv1.3.
- Rotate by atomic symlink switch or Kubernetes Secret update; reload either on next connection or via a lightweight signal and SSLContext re-init.
- Pin server names (SNI) and verify hostnames to prevent MITM; reject weak signature algorithms.
Tip: prefer short-lived client certs (hours to days) issued by an automated CA. Shorter lifetimes reduce blast radius without operator overhead.
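The JVM system properties above configure TLS globally; for per-connection control (and for the SSLContext re-init mentioned for rotation) you can build the context programmatically. A sketch assuming the mounted PKCS#12 paths from the example:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

// Builds a TLS 1.3 SSLContext from mounted PKCS#12 stores; paths and passwords
// are assumptions. Rotation re-runs this and swaps the context on the client.
public class MtlsContext {

    public static SSLContext build(Path keyStorePath, char[] keyPw,
                                   Path trustStorePath, char[] trustPw) throws Exception {
        KeyStore ks = KeyStore.getInstance("PKCS12");
        try (InputStream in = Files.newInputStream(keyStorePath)) { ks.load(in, keyPw); }
        KeyManagerFactory kmf =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(ks, keyPw);

        KeyStore ts = KeyStore.getInstance("PKCS12");
        try (InputStream in = Files.newInputStream(trustStorePath)) { ts.load(in, trustPw); }
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(ts);

        SSLContext ctx = SSLContext.getInstance("TLSv1.3");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return ctx;
    }
}
```

Hostname verification remains the HTTPS client's job; do not disable it when wiring this context in.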
OAuth2/OIDC client credentials for service accounts
When calling HTTP APIs, OAuth2 client credentials provide scoped tokens with auditable lifetimes. They fit non-interactive containers and integrate well with gateways. The client credentials grant is defined in Section 4.4 of RFC 6749 (OAuth 2.0).
- Store client_id/secret as container secrets; fetch tokens from the IdP’s token endpoint on startup and on expiry.
- Cache the access token in memory and refresh proactively at 80–90% of its lifetime; back off on 5xx and invalidate on 401/invalid_token.
- Validate TLS to the IdP and pin issuer and audience in your validation logic.
- Consider JWT access tokens with signature verification if you need offline introspection; otherwise use introspection endpoints.
Tip: minimize scopes. Split clients by environment or service to reduce lateral blast radius if a secret leaks.
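The proactive-refresh rule above reduces to a couple of pure functions. A sketch (no particular IdP is assumed; expires_in is the lifetime the token endpoint returned):

```java
// Proactive token refresh timing: refresh at a fraction (80-90%) of the token lifetime.
public class TokenRefresh {

    public static long refreshDelayMillis(long expiresInSeconds, double fraction) {
        if (fraction <= 0.0 || fraction >= 1.0)
            throw new IllegalArgumentException("fraction must be in (0, 1)");
        return Math.round(expiresInSeconds * 1000.0 * fraction);
    }

    public static boolean shouldRefresh(long nowMillis, long obtainedAtMillis,
                                        long expiresInSeconds, double fraction) {
        return nowMillis - obtainedAtMillis >= refreshDelayMillis(expiresInSeconds, fraction);
    }
}
```

A background task checks shouldRefresh on a short interval, fetching a new token when it fires; a 401/invalid_token response invalidates the cache immediately regardless of timing.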
Kerberos/JAAS configuration in containers
Kerberos remains common in AD-backed enterprises and for SPNEGO-authenticated HTTP or JMS. Containers must supply krb5 and JAAS configs and maintain tight time sync.
- Mount krb5.conf into /etc/krb5.conf and a keytab at /etc/security/keytabs/client.keytab.
- Provide a JAAS login.conf entry, for example: com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true keyTab="/etc/security/keytabs/client.keytab" storeKey=true principal="svc/myclient@REALM";
- Set -Djava.security.auth.login.config=/path/login.conf and -Dsun.security.krb5.debug=true for first-run diagnostics.
- Ensure NTP/chrony sync; even ±5 minutes of skew can break tickets.
Tip: isolate keytabs per service identity and rotate them alongside SPNs. Never bake them into images.
Networking patterns for client-to-server traffic in containers
Your client’s reliability depends on predictable name resolution, routing, and TLS termination. Docker’s default bridge and host networking have different DNS and MTU characteristics. Kubernetes adds cluster DNS and Services that shape discovery.
Choose the simplest mode that meets your security and observability needs. Plan for TLS termination points (client direct to server vs via a sidecar or gateway). Account for NAT effects on MTU. Standardize on health and readiness checks that reflect actual upstream reachability, not just local process aliveness.
Docker bridge vs host networking and DNS/service discovery
Docker bridge networks provide isolation and built-in DNS for container names, while host networking trades isolation for lower overhead and direct access.
- Bridge: user-defined bridges allow referring to other containers by name using Docker’s embedded DNS server (see the Docker networking documentation). Typical overhead is low, but verify MTU (often 1500 vs. 1450+ under nested virtualization).
- Host: --network host on Linux shares the host stack; this can reduce jitter and simplify some Kerberos or multicast patterns. On macOS/Windows, Docker Desktop runs containers in a VM, so host mode behaves differently.
- Discovery: avoid hardcoding container IPs; prefer DNS names and, if needed, environment-variable injection at container start.
Tip: test with dig, getent hosts, and path MTU probes before production rollout. These quickly reveal DNS search path or fragmentation issues.
Kubernetes Services, DNS, and sidecar/proxy patterns
In Kubernetes, clients typically call Services and use cluster DNS for names. Sidecars add mTLS and policy without changing client code.
- Services: choose ClusterIP for in-cluster discovery and NodePort/LoadBalancer for ingress. DNS names resolve as service.namespace.svc.cluster.local, with search paths easing short names (documented in Kubernetes Services and DNS).
- Sidecars/proxies: mTLS and zero-trust policies can be provided by proxies; the client connects locally (e.g., localhost:port) while the proxy handles TLS and identity.
- External names: for external servers, use headless Services plus Endpoints or ExternalName records, and manage truststores carefully.
Tip: validate that your client honors DNS TTLs. JVMs cache DNS unless configured via the networkaddress.cache.ttl security property.
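Capping the JVM's DNS cache is a one-liner, but it must run before the first lookup because InetAddress reads the properties once. A sketch; 30 seconds is an assumption to align with typical cluster TTLs:

```java
import java.security.Security;

// Cap JVM DNS caching so rotated Service endpoints are re-resolved.
// Must run before the first InetAddress lookup; the values are read once.
public class DnsTtlConfig {

    public static void applyDnsTtl(int positiveSeconds, int negativeSeconds) {
        Security.setProperty("networkaddress.cache.ttl",
                Integer.toString(positiveSeconds));
        Security.setProperty("networkaddress.cache.negative.ttl",
                Integer.toString(negativeSeconds));
    }
}
```

Call DnsTtlConfig.applyDnsTtl(30, 5) as the first statement in main, before any network I/O.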
Performance and benchmarking considerations
Performance hinges on upstream latency, TLS handshake cost, and JVM warmup. Containers add minimal overhead when configured correctly. Establish a repeatable benchmark that reflects your client’s call patterns and security posture.
A simple methodology: fix the hardware and JVM version, disable CPU frequency scaling, and use TLS 1.3 with session resumption. Run three minutes of warmup, then five minutes of measurement, recording p50/p95 latencies, CPU, memory RSS, and GC metrics. Compare native vs Docker bridge vs your orchestrator. Track cold-start time to first successful authenticated call.
Tuning tips:
- Enable connection pooling and TLS session resumption; avoid per-call socket creation.
- Right-size container memory limits and set -XX:MaxRAMPercentage to avoid unexpected GC behavior.
- Warm caches (DNS, TLS) during startup hooks; prebuild JIT profiles in long-lived pods if the workload is stable.
- Minimize image size to speed cold starts; layer the JRE and dependencies effectively.
- If GUI or GPU access is required, measure compositing or driver overhead explicitly; treat --device or --gpus as last-resort capabilities.
Dockerizing a Jakarta EE application client
A containerized client is reproducible and easier to secure, but you must handle classloading, secrets, and environment-specific config cleanly. Start from a slim JRE base, run as non-root, and externalize keystores and endpoints.
A practical pattern:
- Base image: adopt a slim, up-to-date JRE (e.g., Eclipse Temurin JRE 17) and install only required OS packages.
- Non-root: create a dedicated user and set USER 10001; ensure mounted volumes and secrets are readable by that UID.
- Entrypoint: run your client main class with JAVA_TOOL_OPTIONS for truststores, ACC properties, and resource limits. For GlassFish/Payara ACC runs, include the vendor launcher and required libraries.
- Configuration: inject endpoints as environment variables (e.g., EJB_HOST, EJB_PORT, OIDC_ISSUER) and validate at startup with a fail-fast check.
- Hardware/GUI (when unavoidable): for X11 on Linux, mount the X11 socket and set DISPLAY; for GPUs, use --gpus all and vendor runtimes. Apply least privilege and prefer network remoting (RDP/VNC) over local device passthrough in production.
Tip: never bake secrets or keystores into the image. Mount them at runtime and scope file permissions to the runtime user only.
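The fail-fast configuration check is a few lines. A sketch; the variable names EJB_HOST, EJB_PORT, and OIDC_ISSUER follow the example above and are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Fail-fast startup validation; required variable names are illustrative.
public class StartupConfig {

    // Return the required keys that are absent or blank in the given environment.
    public static List<String> missing(Map<String, String> env, String... required) {
        List<String> absent = new ArrayList<>();
        for (String key : required) {
            String value = env.get(key);
            if (value == null || value.isBlank()) absent.add(key);
        }
        return absent;
    }

    // Call with System.getenv() as the first step in main.
    public static void validateOrFail(Map<String, String> env) {
        List<String> absent = missing(env, "EJB_HOST", "EJB_PORT", "OIDC_ISSUER");
        if (!absent.isEmpty())
            throw new IllegalStateException("Missing required env: " + absent);
    }
}
```

Failing before any network I/O turns misconfiguration into an immediate, obvious crash loop instead of a hung pod.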
Persistent state and offline operation
Many clients need a durable cache or queue to survive restarts or intermittent networks. In containers, use volumes for persistence, encrypt sensitive data at rest, and design sync strategies that reconcile safely when connectivity returns.
Persist only what you must: local credentials, last-seen offsets, idempotent request journals, or a small embedded database. Mount a named volume (e.g., --mount type=volume,src=client-data,dst=/var/lib/myclient) or a hostPath/PersistentVolume in Kubernetes.
Protect sensitive files with OS permissions and application-level encryption. If the host or node is untrusted, add disk encryption at the platform layer.
For offline mode, apply a write-ahead log or outbox pattern and reconcile with at-least-once semantics. Bound caches by size and age. Surface a health endpoint that distinguishes “ready but offline” from “fully synced,” so orchestrators don’t thrash healthy but disconnected pods.
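A minimal outbox under these assumptions (an append-only text journal on the mounted volume; the caller deduplicates by record id on replay, giving at-least-once semantics):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

// Append-only outbox journal sketch; record format ("id|payload") is an assumption.
public class OutboxJournal {

    private final Path file;

    public OutboxJournal(Path file) { this.file = file; }

    // Append one idempotent request record before attempting the remote call.
    public void append(String record) throws IOException {
        Files.writeString(file, record + System.lineSeparator(), StandardCharsets.UTF_8,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // Replay pending records after reconnect; the consumer deduplicates by id.
    public List<String> pending() throws IOException {
        if (!Files.exists(file)) return List.of();
        return Files.readAllLines(file, StandardCharsets.UTF_8);
    }
}
```

A production version would also truncate or compact the journal after acknowledged delivery and bound it by size and age, as noted above.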
Troubleshooting connectivity and TLS issues
Most client-in-container outages reduce to DNS, MTU, clock skew, or certificate trust problems. A fast, repeatable playbook gets you from symptom to root cause.
- DNS: verify resolution with getent hosts server and nslookup server; check search domains and /etc/resolv.conf.
- MTU: test with ping -M do -s 1472 server (or adjust the size) to detect fragmentation; tune pod or bridge MTU accordingly.
- TLS: run openssl s_client -connect host:443 -servername host from inside the container; confirm SNI, the certificate chain, and ALPN; match truststore CAs.
- Time sync: confirm chronyc tracking or timedatectl; skew breaks TLS and Kerberos.
- Proxies and firewalls: trace the route and check for transparent proxies altering TLS; align ALPN and ciphers with server policy.
- JVM flags: enable -Djavax.net.debug=ssl,handshake only for diagnosis; revert afterward to avoid log noise and leaking key material into logs.
Tip: capture a short tcpdump within the container namespace to see SYN, MSS, and TLS alerts. It often short-circuits guesswork when facing MTU or SNI mismatches.
Compliance, SBOMs, and image signing
Regulated environments expect you to know what’s in your container and to prove its provenance. SBOMs and signatures, together with baseline hardening, deliver that assurance.
Generate an SBOM during build (for example, CycloneDX via your build tool) and attach it as an image layer or artifact in your registry. Sign images using Sigstore Cosign and enforce verification in CI/CD or via an admission policy.
Align your image to the CIS Docker Benchmark by running as non-root, minimizing capabilities, and pinning package versions.
Concrete steps:
- SBOM: produce a CycloneDX JSON report and store it alongside the image; include transitive Java dependencies to catch CVEs early, referencing CycloneDX.
- Signing: use Cosign to sign
registry/your/client:tagwith keyless or key-backed workflows; verify in deployment. - Policies: enforce “signed-and-known SBOM required” rules in your platform; block images that fail verification or carry critical CVEs.
Tip: fail builds on high-severity CVEs in your SBOM scan unless a documented exception exists. It’s easier than negotiating emergency changes after deployment.
Roadmap: ACC support and modern alternatives
ACC support has narrowed under Jakarta EE as vendors prioritize lightweight, HTTP-first stacks. GlassFish and Payara continue to support an application client container and appclient workflows. WildFly emphasizes remote EJB over HTTP remoting without a classic ACC. Open Liberty focuses on REST, MicroProfile, and JMS rather than remote EJB or ACC semantics.
Always verify current capabilities against the latest Jakarta EE Platform specifications, especially since EE 9+ transitioned to jakarta.* packages.
If you’re modernizing, map EJB remotes to REST/gRPC endpoints or messaging. REST with strong types (OpenAPI + codegen) and gRPC for low-latency RPCs reduce client runtime coupling. They also make zero-trust and observability simpler.
For interactive use cases, consider web UI with WebSockets for real-time updates. For headless operators, a CLI client in Java SE with OAuth2 or mTLS is usually leaner than ACC and easier to containerize.
Migration tips:
- Start by wrapping remote EJB calls behind an internal adapter so you can switch transport without touching business logic.
- Replace ACC-only injection with CDI standalone or explicit constructors; favor portable JNDI only as a bridge.
- Transition security to TLS 1.3 and OAuth2 client credentials wherever HTTP is the target; keep Kerberos for AD-only paths and plan a future cutover.
- Measure performance before and after; with HTTP/2 and connection reuse, many clients see equal or better p95 latency.
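The first migration tip looks like this in code. The names are illustrative; the in-memory implementation stands in for an EJB- or HTTP-backed one, so swapping transports never touches business logic:

```java
// Transport-neutral port: business code depends only on the interface, so the
// implementation can move from remote EJB to REST/gRPC without callers changing.
public interface OrderGateway {
    String placeOrder(String orderId);
}

// Stand-in implementation; a real one would wrap an EJB proxy or an HTTP client.
class InMemoryOrderGateway implements OrderGateway {
    @Override
    public String placeOrder(String orderId) {
        return "accepted:" + orderId;
    }
}
```

Business code receives an OrderGateway via its constructor; a fake like the one above also makes the client unit-testable without any server.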
Final thought: choose the smallest sufficient runtime for your client. If ACC features are essential today, use them confidently and containerize with the security and networking patterns above. But if your path forward is HTTP-first, start the migration now. The operational simplicity and ecosystem support will compound quickly.