request help: load(): failed to load plugins: failed to read plugin list from local file, context: init_worker_by_lua*
Issue description
2021/08/19 14:39:40 [error] 98#98: 1564 [lua] plugin.lua:178: load(): failed to load plugins: failed to read plugin list from local file, context: init_worker_by_lua
I checked the Kubernetes ConfigMap and I don't see any problem with it.
The log tells me this is a configuration problem, but the message is not specific. How can I troubleshoot it?
apiVersion: v1
data:
  config.yaml: |-
    apisix:
      node_listen: 33333            # APISIX listening port
      enable_admin: true
      enable_admin_cors: true       # Admin API support CORS response headers.
      enable_debug: false
      enable_dev_mode: false        # Sets nginx worker_processes to 1 if set to true
      enable_reuseport: true        # Enable nginx SO_REUSEPORT switch if set to true.
      enable_ipv6: true
      config_center: etcd           # etcd: use etcd to store the config value
                                    # yaml: fetch the config value from local yaml file `/your_path/conf/apisix.yaml`
      #proxy_protocol:              # Proxy Protocol configuration
      #  listen_http_port: 9181     # The port with proxy protocol for http; it differs from node_listen and port_admin.
      #                             # This port can only receive http requests with proxy protocol, while node_listen &
      #                             # port_admin can only receive plain http requests. If you enable proxy protocol,
      #                             # you must use this port to receive http requests with proxy protocol.
      #  listen_https_port: 9182    # The port with proxy protocol for https
      #  enable_tcp_pp: true        # Enable the proxy protocol for tcp proxy; it works for the stream_proxy.tcp option
      #  enable_tcp_pp_to_upstream: true # Enables the proxy protocol to the upstream server
      proxy_cache:                  # Proxy Caching configuration
        cache_ttl: 10s              # The default caching time if the upstream does not specify the cache time
        zones:                      # The parameters of a cache
        - name: disk_cache_one      # The name of the cache; administrators can specify
                                    # which cache to use by name in the admin api
          memory_size: 50m          # The size of shared memory; used to store the cache index
          disk_size: 1G             # The size of disk; used to store the cache data
          disk_path: "/tmp/disk_cache_one" # The path to store the cache data
          cache_levels: "1:2"       # The hierarchy levels of a cache
        #- name: disk_cache_two
        #  memory_size: 50m
        #  disk_size: 1G
        #  disk_path: "/tmp/disk_cache_two"
        #  cache_levels: "1:2"
      allow_admin:                  # http://nginx.org/en/docs/http/ngx_http_access_module.html#allow
        - all
        #- 127.0.0.0/24             # If we don't set any IP list, then any IP access is allowed by default.
        #- "::/64"
      #port_admin: 9180             # use a separate port
      #https_admin: true            # enable HTTPS when using a separate port for the Admin API.
      #                             # The Admin API will use conf/apisix_admin_api.crt and conf/apisix_admin_api.key as certificate.
      admin_api_mtls:               # Depends on `port_admin` and `https_admin`.
        admin_ssl_cert: ""          # Path of your self-signed server side cert.
        admin_ssl_cert_key: ""      # Path of your self-signed server side key.
        admin_ssl_ca_cert: ""       # Path of your self-signed ca cert. The CA is used to sign all admin api callers' certificates.
      # Default token when using the API to call the Admin API.
      # *NOTE*: Highly recommended to modify this value to protect APISIX's Admin API.
      # Disabling this configuration item means that the Admin API does not
      # require any authentication.
      admin_key:
        -
          name: "admin"
          key: 66cc9d6f293245369721fd5700fc5e5e
          role: admin               # admin: manage all configuration data
                                    # viewer: can only view configuration data
        -
          name: "viewer"
          key: 66cc9d6f293245369721fd5700fc5e5e
          role: viewer
      delete_uri_tail_slash: false  # delete the '/' at the end of the URI
      router:
        http: 'radixtree_uri'       # radixtree_uri: match route by uri (based on radixtree)
                                    # radixtree_host_uri: match route by host + uri (based on radixtree)
        ssl: 'radixtree_sni'        # radixtree_sni: match route by SNI (based on radixtree)
      #stream_proxy:                # TCP/UDP proxy
      #  tcp:                       # TCP proxy port list
      #    - 9100
      #    - 9101
      #  udp:                       # UDP proxy port list
      #    - 9200
      #    - 9211
      dns_resolver:                 # If not set, read from `/etc/resolv.conf`
        #- 10.68.0.2
        #- 8.8.8.8
      dns_resolver_valid: 30        # valid time for a dns result, in seconds
      resolver_timeout: 5           # resolver timeout
      ssl:
        enable: true
        enable_http2: true
        listen_port: 38631
        ssl_protocols: "TLSv1.2 TLSv1.3"
        ssl_ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384"
        key_encrypt_salt: "edd1c9f0985e76a2" # If not set, the original ssl key is saved into etcd.
                                    # If set, it must be a string of length 16, and the ssl key is encrypted with AES-128-CBC.
                                    # !!! Do not change it after saving your ssl: already-saved ssl keys can't be decrypted if you change it !!!
      #discovery: eureka            # service discovery center
    nginx_config:                   # config for rendering the template to generate nginx.conf
      error_log: "logs/error.log"
      error_log_level: "error"      # warn, error
      worker_processes: auto
      worker_rlimit_nofile: 20480   # the number of files a worker process can open; should be larger than worker_connections
      worker_shutdown_timeout: 240s # timeout for a graceful shutdown of worker processes
      event:
        worker_connections: 10620
      http:
        access_log: "logs/access.log"
        access_log_format: "$remote_addr - $remote_user [$time_local] $http_host \"$request\" $status $body_bytes_sent $request_time \"$http_referer\" \"$http_user_agent\" $upstream_addr $upstream_status $upstream_response_time"
        keepalive_timeout: 60s      # timeout during which a keep-alive client connection will stay open on the server side.
        client_header_timeout: 60s  # timeout for reading the client request header; a 408 (Request Time-out) error is returned on expiry
        client_body_timeout: 60s    # timeout for reading the client request body; a 408 (Request Time-out) error is returned on expiry
        send_timeout: 10s           # timeout for transmitting a response to the client; the connection is closed on expiry
        underscores_in_headers: "on" # enables the use of underscores in client request header fields by default
        real_ip_header: "X-Real-IP" # http://nginx.org/en/docs/http/ngx_http_realip_module.html#real_ip_header
        real_ip_from:               # http://nginx.org/en/docs/http/ngx_http_realip_module.html#set_real_ip_from
          - 127.0.0.1
          - 'unix:'
        #lua_shared_dicts:          # add custom shared cache to nginx.conf
        #  ipc_shared_dict: 100m    # custom shared cache, format: `cache-key: cache-size`
        lua_shared_dicts:
          skywalking2-tracing-buffer: 100m
          plugin-limit-conn2: 10m
    etcd:
      host:                         # it's possible to define multiple etcd host addresses of the same etcd cluster.
        # multiple etcd addresses
        - "http://etcd-2339fe28e36a46998814b4e90a8de007-0.etcd-2339fe28e36a46998814b4e90a8de007.z160000001302.svc.cluster.local.:2379"
        - "http://etcd-2339fe28e36a46998814b4e90a8de007-1.etcd-2339fe28e36a46998814b4e90a8de007.z160000001302.svc.cluster.local.:2379"
        - "http://etcd-2339fe28e36a46998814b4e90a8de007-2.etcd-2339fe28e36a46998814b4e90a8de007.z160000001302.svc.cluster.local.:2379"
      prefix: "/apisix"             # apisix configuration prefix
      timeout: 30                   # 30 seconds
      #user: root                   # root username for etcd
      password: c7db345831054c7fadfad88232831a96 # root password for etcd
    #eureka:
    #  host:                        # it's possible to define multiple eureka host addresses of the same eureka cluster.
    #    - "http://127.0.0.1:8761"
    #  prefix: "/eureka/"
    #  fetch_interval: 30           # default 30s
    #  weight: 100                  # default weight for a node
    #  timeout:
    #    connect: 2000              # default 2000ms
    #    send: 2000                 # default 2000ms
    #    read: 5000                 # default 5000ms
    plugins:
      #- example-plugin
      #- limit-req
      #- limit-count
      #- limit-conn
      #- key-auth
      #- basic-auth
      - prometheus
      #- node-status
      #- jwt-auth
      #- zipkin
      #- ip-restriction
      #- grpc-transcode
      #- serverless-pre-function
      #- serverless-post-function
      #- openid-connect
      #- proxy-rewrite
      #- redirect
      #- response-rewrite
      #- fault-injection
      #- udp-logger
      #- wolf-rbac
      #- tcp-logger
      #- kafka-logger
      #- cors
      #- consumer-restriction
      #- syslog
      #- batch-requests
      #- http-logger
      #- skywalking
      #- echo
      #- authz-keycloak
      #- uri-blocker
      #- request-validation
      #- proxy-cache
      #- proxy-mirror
      #- skywalking2
      - log-rotate
      #- proxy-rewrite-lua
    stream_plugins:
      - mqtt-proxy
    plugin_attr:
      log-rotate:
        interval: 86400             # rotate interval (unit: second)
        max_kept: 10                # max number of log files to keep
kind: ConfigMap
metadata:
  creationTimestamp: "2021-08-19T02:28:47Z"
  labels:
    clus_id: 2339fe28e36a46998814b4e90a8de007
    gw_inst_id: f917682d23d64f3b9257609ee20e9882
    gw_name_b64: b645rWB6YeP572R5YWzMg..0
  name: apisix-cm-2339fe28e36a46998814b4e90a8de007
  namespace: z160000001302
  resourceVersion: "1280314"
  selfLink: /api/v1/namespaces/z160000001302/configmaps/apisix-cm-2339fe28e36a46998814b4e90a8de007
  uid: 2f7f5616-0a75-49b1-9513-946405f2d9b8
Environment
- apisix version: 1.5
- OS: Linux node143 4.9.6-1.el7.elrepo.x86_64
- OpenResty / Nginx version: openresty/1.17.8.2
- Kubernetes: v1.15.12
2021/08/19 14:39:40 [error] 98#98: 1564 [lua] plugin.lua:178: load(): failed to load plugins: failed to read plugin list from local file, context: init_worker_by_lua
The error indicates that the plugins can't be found in config.yaml.
However, your configuration looks good, and I have verified it on my side.
Maybe something went wrong in the environment. Could you add more logging to plugin.lua? Printing the configuration file content inside the pod may also help.
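A minimal sketch of the kind of extra logging meant here, added near the failing branch in apisix/plugin.lua. The exact variable names and the shape of the check are assumptions based on this error message, not verified against the APISIX 1.5 source:

```lua
-- Hedged sketch: log what the local config actually contains when the
-- plugin list cannot be read. `core` is the apisix.core module that
-- plugin.lua already requires at the top of the file.
local local_conf = core.config.local_conf()
if not local_conf or not local_conf.plugins then
    core.log.error("failed to read plugin list, local_conf: ",
                   core.json.encode(local_conf))
end
```

If `local_conf` prints as nil or without a `plugins` key, the file APISIX is reading differs from the ConfigMap shown above.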
Is it convenient to give code examples?
Just dump the part you are interested in.
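For example, to dump just the plugin list as the pod sees it. The pod name "apisix-0" is a placeholder, and the conf path is the APISIX default; substitute your own values:

```shell
# Print only the `plugins:` block (up to `stream_plugins:`) from the
# config file inside the running pod.
kubectl -n z160000001302 exec apisix-0 -- \
  sed -n '/^plugins:/,/^stream_plugins:/p' /usr/local/apisix/conf/config.yaml
```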
I don't understand
Maybe something wrong happened in the environment. Could you add more log to the plugin.lua? And print the configuration file content in the pod may help.
Print the configuration file content? Isn't the configuration file already correct?
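The point of printing it is that the ConfigMap can be correct while the file the pod actually reads is not (stale mount, wrong mountPath, or a subPath volume that doesn't pick up updates). A hedged way to rule that out, using the names from this issue; the pod name "apisix-0" is a placeholder:

```shell
# Extract config.yaml from the ConfigMap (note the escaped dot in the key).
kubectl -n z160000001302 get configmap \
  apisix-cm-2339fe28e36a46998814b4e90a8de007 \
  -o jsonpath='{.data.config\.yaml}' > /tmp/cm-config.yaml

# Grab the file the APISIX workers actually load (default conf path).
kubectl -n z160000001302 exec apisix-0 -- \
  cat /usr/local/apisix/conf/config.yaml > /tmp/pod-config.yaml

# Any difference here explains why a "correct" ConfigMap still fails.
diff /tmp/cm-config.yaml /tmp/pod-config.yaml
```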
This issue has been marked as stale due to 350 days of inactivity. It will be closed in 2 weeks if no further activity occurs. If this issue is still relevant, please simply write any comment. Even if closed, you can still revive the issue at any time or discuss it on the [email protected] list. Thank you for your contributions.
This issue has been closed due to lack of activity. If you think that is incorrect, or the issue requires additional review, you can revive the issue at any time.