Neo4j fails to restart after changing an APOC setting

After changing an APOC setting, Neo4j fails to restart. I had already installed the APOC plugin, then set apoc.import.file.enabled=true, and after restarting Neo4j the DBMS will no longer open.
The error messages are shown below.

[screenshot: error message 1]

[screenshot: error message 2]

Below is my full configuration file. Could someone take a look at where exactly the problem is? The only change I made on top of the original is adding apoc.import.file.enabled=true. Thanks in advance.

#*****************************************************************
# Neo4j configuration
#
# For more details and a complete list of settings, please see
# https://neo4j.com/docs/operations-manual/current/reference/configuration-settings/
#*****************************************************************

# Paths of directories in the installation.
#server.directories.data=data
#server.directories.plugins=plugins
#server.directories.logs=logs
#server.directories.lib=lib
#server.directories.run=run
#server.directories.licenses=licenses
#server.directories.metrics=metrics
#server.directories.transaction.logs.root=data/transactions
#server.directories.dumps.root=data/dumps

# This setting constrains all `LOAD CSV` import files to be under the `import` directory. Remove or comment it out to
# allow files to be loaded from anywhere in the filesystem; this introduces possible security problems. See the
# `LOAD CSV` section of the manual for details.
server.directories.import=import

# Whether requests to Neo4j are authenticated.
# To disable authentication, uncomment this line
dbms.security.auth_enabled=true

# Number of databases in Neo4j is limited.
# To change this limit please uncomment and adapt following setting:
# server.max_databases=100

# Enable online backups to be taken from this database.
#server.backup.enabled=true

# By default the backup service will only listen on localhost.
# To enable remote backups you will have to bind to an external
# network interface (e.g. 0.0.0.0 for all interfaces).
# The protocol running varies depending on deployment. In a cluster this is the
# same protocol that runs on server.cluster.listen_address.
#server.backup.listen_address=0.0.0.0:6362

#*****************************************************************
# Initial DBMS Settings
#*****************************************************************

# Initial DBMS settings are picked up from the config file once, when a cluster first starts, and then transferred into
# the running DBMS. This means later changes to the values will not be seen. There are procedures to change the values
# after the initial start

# Name of the default database (aliases are not supported). Can be changed with the 'dbms.setDefaultDatabase' procedure.
#initial.dbms.default_database=neo4j

# Initial default number of primary and secondary instances of user databases. If the user does not specify the number
# of primaries and secondaries in 'CREATE DATABASE', these values will be used, unless they are overwritten with the
# 'dbms.setDefaultAllocationNumbers' procedure.
#initial.dbms.default_primaries_count=1
#initial.dbms.default_secondaries_count=0

#********************************************************************
# Memory Settings
#********************************************************************
#
# Memory settings are specified kilobytes with the 'k' suffix, megabytes with
# 'm' and gigabytes with 'g'.
# If Neo4j is running on a dedicated server, then it is generally recommended
# to leave about 2-4 gigabytes for the operating system, give the JVM enough
# heap to hold all your transaction state and query context, and then leave the
# rest for the page cache.

# Java Heap Size: by default the Java heap size is dynamically calculated based
# on available system resources. Uncomment these lines to set specific initial
# and maximum heap size.
#server.memory.heap.initial_size=512m
#server.memory.heap.max_size=512m

# The amount of memory to use for mapping the store files.
# The default page cache memory assumes the machine is dedicated to running
# Neo4j, and is heuristically set to 50% of RAM minus the Java heap size.
#server.memory.pagecache.size=10g

# Limit the amount of memory that all of the running transaction can consume.
# The default value is 70% of the heap size limit.
#dbms.memory.transaction.total.max=256m

# Limit the amount of memory that a single transaction can consume.
# By default there is no limit.
#db.memory.transaction.max=16m

# Transaction state location. It is recommended to use ON_HEAP.
# db.tx_state.memory_allocation=ON_HEAP

#*****************************************************************
# Network connector configuration
#*****************************************************************

# With default configuration Neo4j only accepts local connections.
# Use 0.0.0.0 to bind to all network interfaces on the machine. If you want to only use a specific interface
# (such as a private IP address on AWS, for example) then use that IP address instead.
#server.default_listen_address=0.0.0.0

# You can also choose a specific network interface, and configure a non-default
# port for each connector, by setting their individual listen_address.

# The address at which this server can be reached by its clients. This may be the server's IP address or DNS name, or
# it may be the address of a reverse proxy which sits in front of the server. This setting may be overridden for
# individual connectors below.
#server.default_advertised_address=localhost

# You can also choose a specific advertised hostname or IP address, and
# configure an advertised port for each connector, by setting their
# individual advertised_address.

# By default, encryption is turned off.
# To turn on encryption, an ssl policy for the connector needs to be configured
# Read more in SSL policy section in this file for how to define a SSL policy.

# Bolt connector
server.bolt.enabled=true
#server.bolt.tls_level=DISABLED
#server.bolt.listen_address=:7687
#server.bolt.advertised_address=:7687

# HTTP Connector. There can be zero or one HTTP connectors.
server.http.enabled=true
#server.http.listen_address=:7474
#server.http.advertised_address=:7474

# HTTPS Connector. There can be zero or one HTTPS connectors.
server.https.enabled=false
#server.https.listen_address=:7473
#server.https.advertised_address=:7473

# Number of Neo4j worker threads.
#server.threads.worker_count

#*****************************************************************
# SSL policy configuration
#*****************************************************************

# Each policy is configured under a separate namespace, e.g.
#    dbms.ssl.policy.<scope>.*
#    <scope> can be any of 'bolt', 'https', 'cluster' or 'backup'
#
# The scope is the name of the component where the policy will be used
# Each component where the use of an ssl policy is desired needs to declare at least one setting of the policy.
# Allowable values are 'bolt', 'https', 'cluster' or 'backup'.

# E.g if bolt and https connectors should use the same policy, the following could be declared
#   dbms.ssl.policy.bolt.base_directory=certificates/default
#   dbms.ssl.policy.https.base_directory=certificates/default
# However, it's strongly encouraged to not use the same key pair for multiple scopes.
#
# N.B: Note that a connector must be configured to support/require
#      SSL/TLS for the policy to actually be utilized.
#
# see: dbms.connector.*.tls_level

# SSL settings (dbms.ssl.policy.<scope>.*)
#  .base_directory       Base directory for SSL policies paths. All relative paths within the
#                        SSL configuration will be resolved from the base dir.
#
#  .private_key          A path to the key file relative to the '.base_directory'.
#
#  .private_key_password The password for the private key.
#
#  .public_certificate   A path to the public certificate file relative to the '.base_directory'.
#
#  .trusted_dir          A path to a directory containing trusted certificates.
#
#  .revoked_dir          Path to the directory with Certificate Revocation Lists (CRLs).
#
#  .verify_hostname      If true, the server will verify the hostname that the client uses to connect with. In order
#                        for this to work, the server public certificate must have a valid CN and/or matching
#                        Subject Alternative Names.
#
#  .client_auth          How the client should be authorized. Possible values are: 'none', 'optional', 'require'.
#
#  .tls_versions         A comma-separated list of allowed TLS versions. By default only TLSv1.2 is allowed.
#
#  .trust_all            Setting this to 'true' will ignore the trust truststore, trusting all clients and servers.
#                        Use of this mode is discouraged. It would offer encryption but no security.
#
#  .ciphers              A comma-separated list of allowed ciphers. The default ciphers are the defaults of
#                        the JVM platform.

# Bolt SSL configuration
#dbms.ssl.policy.bolt.enabled=true
#dbms.ssl.policy.bolt.base_directory=certificates/bolt
#dbms.ssl.policy.bolt.private_key=private.key
#dbms.ssl.policy.bolt.public_certificate=public.crt
#dbms.ssl.policy.bolt.client_auth=NONE

# Https SSL configuration
#dbms.ssl.policy.https.enabled=true
#dbms.ssl.policy.https.base_directory=certificates/https
#dbms.ssl.policy.https.private_key=private.key
#dbms.ssl.policy.https.public_certificate=public.crt
#dbms.ssl.policy.https.client_auth=NONE

# Cluster SSL configuration
#dbms.ssl.policy.cluster.enabled=true
#dbms.ssl.policy.cluster.base_directory=certificates/cluster
#dbms.ssl.policy.cluster.private_key=private.key
#dbms.ssl.policy.cluster.public_certificate=public.crt

# Backup SSL configuration
#dbms.ssl.policy.backup.enabled=true
#dbms.ssl.policy.backup.base_directory=certificates/backup
#dbms.ssl.policy.backup.private_key=private.key
#dbms.ssl.policy.backup.public_certificate=public.crt

#*****************************************************************
# Logging configuration
#*****************************************************************

# To enable HTTP logging, uncomment this line
#dbms.logs.http.enabled=true

# To enable GC Logging, uncomment this line
#server.logs.gc.enabled=true

# GC Logging Options
# see https://docs.oracle.com/en/java/javase/11/tools/java.html#GUID-BE93ABDC-999C-4CB5-A88B-1994AAAC74D5
#server.logs.gc.options=-Xlog:gc*,safepoint,age*=trace

# Number of GC logs to keep.
#server.logs.gc.rotation.keep_number=5

# Size of each GC log that is kept.
#server.logs.gc.rotation.size=20m

# Log executed queries. One of OFF, INFO and VERBOSE. INFO logs queries longer than a given threshold, VERBOSE logs start and end of all queries.
#db.logs.query.enabled=VERBOSE

# If the execution of query takes more time than this threshold, the query is logged. If set to zero then all queries
# are logged. Only used if `db.logs.query.enabled` is set to INFO
#db.logs.query.threshold=0

# Include parameters for the executed queries being logged (this is enabled by default).
#db.logs.query.parameter_logging_enabled=true

# The security log is always enabled when `dbms.security.auth_enabled=true`, for additional
# configuration, look at $NEO4J_HOME/conf/server-logs.xml

#*****************************************************************
# Cluster Configuration
#*****************************************************************

# Uncomment and specify these lines for running Neo4j in a cluster.
# See the cluster documentation at https://neo4j.com/docs/ for details.

# A comma-separated list of endpoints which a server should contact in order to discover other cluster members. It must
# be in the host:port format. For each machine in the cluster, the address will usually be the public ip address of
# that machine. The port will be the value used in the setting "server.discovery.advertised_address" of that server.
#dbms.cluster.discovery.endpoints=localhost:5000,localhost:5001,localhost:5002

# Host and port to bind the cluster member discovery management communication.
# This is the setting to add to the collection of addresses in dbms.cluster.discovery.endpoints.
#server.discovery.listen_address=:5000
#server.discovery.advertised_address=:5000

# Network interface and port for the transaction shipping server to listen on.
# Please note that it is also possible to run the backup client against this port so always limit access to it via the
# firewall and configure an ssl policy.
#server.cluster.listen_address=:6000
#server.cluster.advertised_address=:6000

# Network interface and port for the RAFT server to listen on.
#server.cluster.raft.listen_address=:7000
#server.cluster.raft.advertised_address=:7000

# Network interface and port for server-side routing within the cluster. This allows requests to be forwarded
# from one cluster member to another, if the requests can't be satisfied by the first member (e.g. write requests
# received by a non-leader).
#server.routing.listen_address=:7688
#server.routing.advertised_address=:7688

# List a set of names for groups to which this server should belong. This
# is a comma-separated list and names should only use alphanumericals
# and underscore. This can be used to identify groups of servers in the
# configuration for load balancing and replication policies.
#
# The main intention for this is to group servers, but it is possible to specify
# a unique identifier here as well which might be useful for troubleshooting
# or other special purposes.
#server.groups

#*****************************************************************
# Initial Server Settings
#*****************************************************************

# Initial server settings are used as the default values when enabling a server, but can be overridden by specifying
# options when calling ENABLE (relevant for servers in a cluster *after* those that form the initial cluster).

# Restrict the modes of database that can be hosted on this server
# Allowed values:
# PRIMARY - Host standalone databases, and members of the consensus quorum for a multi-primary database.
# SECONDARY - Only host read replicas, eventually-consistent read-only instances of databases.
# NONE - Can host any mode of database
#initial.server.mode_constraint=NONE

#*****************************************************************
# Cluster Load Balancing
#*****************************************************************

# N.B: Read the online documentation for a thorough explanation!

# Selects the load balancing plugin that shall be enabled.
#dbms.routing.load_balancing.plugin=server_policies

####### Examples for "server_policies" plugin #######

# Will select all available servers as the default policy, which is the
# policy used when the client does not specify a policy preference. The
# default configuration for the default policy is all().
#dbms.routing.load_balancing.config.server_policies.default=all()

# Will select servers in groups 'group1' or 'group2' under the default policy.
#dbms.routing.load_balancing.config.server_policies.default=groups(group1,group2)

# Slightly more advanced example:
# Will select servers in 'group1', 'group2' or 'group3', but only if there are at least 2.
# This policy will be exposed under the name of 'mypolicy'.
#dbms.routing.load_balancing.config.server_policies.mypolicy=groups(group1,group2,group3) -> min(2)

# Below will create an even more advanced policy named 'regionA' consisting of several rules
# yielding the following behaviour:
#
#            select servers in regionA, if at least 2 are available
# otherwise: select servers in regionA and regionB, if at least 2 are available
# otherwise: select all servers
#
# The intention is to create a policy for a particular region which prefers
# a certain set of local servers, but which will fallback to other regions
# or all available servers as required.
#
# N.B: The following configuration uses the line-continuation character \
#      which allows you to construct an easily readable rule set spanning
#      several lines.
#
#dbms.routing.load_balancing.config.server_policies.policyA=\
#groups(regionA) -> min(2);\
#groups(regionA,regionB) -> min(2);

# Note that implicitly the last fallback is to always consider all() servers,
# but this can be prevented by specifying a halt() as the last rule.
#
#dbms.routing.load_balancing.config.server_policies.regionA_only=\
#groups(regionA);\
#halt();

#*****************************************************************
# Cluster Additional Configuration Options
#*****************************************************************
# The following settings are used less frequently.
# If you don't know what these are, you don't need to change these from their default values.

# Cluster Routing Connector. Disable the opening of an additional port to allow
# for internal communication using the same security configuration as CLUSTER
#dbms.routing.enabled=false

# The time window within which the loss of the leader is detected and the first re-election attempt is held.
# The window should be significantly larger than typical communication delays to make conflicts unlikely.
#dbms.cluster.raft.leader_failure_detection_window=20s-23s

# The rate at which leader elections happen. Note that due to election conflicts it might take several attempts to
# find a leader. The window should be significantly larger than typical communication delays to make conflicts unlikely.
#dbms.cluster.raft.election_failure_detection_window=3s-6s

# The time limit allowed for a new member to attempt to update its data to match the rest of the cluster.
#dbms.cluster.raft.membership.join_timeout=10m

# Maximum amount of lag accepted for a new follower to join the Raft group.
#dbms.cluster.raft.membership.join_max_lag=10s

# Raft log pruning frequency.
#dbms.cluster.raft.log.pruning_frequency=10m

# The size to allow the raft log to grow before rotating.
#dbms.cluster.raft.log.rotation_size=250M

# The name of a server_group whose members should be prioritized as leaders for the given database.
# This does not guarantee that members of this group will be leader at all times, but the cluster
# will attempt to transfer leadership to such a member when possible.
# N.B. the final portion of this config key is dynamic and refers to the name of the database being configured.
# You may specify multiple `db.cluster.raft.leader_transfer.priority_group.<database>=<group>` pairs:
#db.cluster.raft.leader_transfer.priority_group.foo
#db.cluster.raft.leader_transfer.priority_group.neo4j

# Which strategy to use when transferring database leaderships around a cluster.
# This can be one of `equal_balancing` or `no_balancing`.
# `equal_balancing` automatically ensures that each Core server holds the leader role for an equal number of databases.
# `no_balancing` prevents any automatic balancing of the leader role.
# Note that if a `leadership_priority_group` is specified for a given database,
# the value of this setting will be ignored for that database.
#dbms.cluster.raft.leader_transfer.balancing_strategy=equal_balancing

# The following setting controls how frequently a server hosting a secondary for a given database attempts to
# fetch an update from a server hosting a primary for that database
#db.cluster.catchup.pull_interval=1s

#********************************************************************
# Security Configuration
#********************************************************************

# The authentication and authorization providers that contains both users and roles.
# This can be one of the built-in `native` or `ldap` auth providers,
# or it can be an externally provided plugin, with a custom name prefixed by `plugin`,
# i.e. `plugin-<AUTH_PROVIDER_NAME>`.
dbms.security.authentication_providers=native,plugin-com.neo4j.plugin.jwt.auth.JwtAuthPlugin
dbms.security.authorization_providers=native,plugin-com.neo4j.plugin.jwt.auth.JwtAuthPlugin

# The time to live (TTL) for cached authentication and authorization info when using
# external auth providers (LDAP or plugin). Setting the TTL to 0 will
# disable auth caching.
#dbms.security.auth_cache_ttl=10m

# The maximum capacity for authentication and authorization caches (respectively).
#dbms.security.auth_cache_max_capacity=10000

# Set to log successful authentication events to the security log.
# If this is set to `false` only failed authentication events will be logged, which
# could be useful if you find that the successful events spam the logs too much,
# and you do not require full auditing capability.
#dbms.security.log_successful_authentication=true

#================================================
# LDAP Auth Provider Configuration
#================================================

# URL of LDAP server to use for authentication and authorization.
# The format of the setting is `<protocol>://<hostname>:<port>`, where hostname is the only required field.
# The supported values for protocol are `ldap` (default) and `ldaps`.
# The default port for `ldap` is 389 and for `ldaps` 636.
# For example: `ldaps://ldap.example.com:10389`.
#
# NOTE: You may want to consider using STARTTLS (`dbms.security.ldap.use_starttls`) instead of LDAPS
# for secure connections, in which case the correct protocol is `ldap`.
#dbms.security.ldap.host=localhost

# Use secure communication with the LDAP server using opportunistic TLS.
# First an initial insecure connection will be made with the LDAP server, and then a STARTTLS command
# will be issued to negotiate an upgrade of the connection to TLS before initiating authentication.
#dbms.security.ldap.use_starttls=false

# The LDAP referral behavior when creating a connection. This is one of `follow`, `ignore` or `throw`.
# `follow` automatically follows any referrals
# `ignore` ignores any referrals
# `throw` throws an exception, which will lead to authentication failure
#dbms.security.ldap.referral=follow

# The timeout for establishing an LDAP connection. If a connection with the LDAP server cannot be
# established within the given time the attempt is aborted.
# A value of 0 means to use the network protocol's (i.e., TCP's) timeout value.
#dbms.security.ldap.connection_timeout=30s

# The timeout for an LDAP read request (i.e. search). If the LDAP server does not respond within
# the given time the request will be aborted. A value of 0 means wait for a response indefinitely.
#dbms.security.ldap.read_timeout=30s

#----------------------------------
# LDAP Authentication Configuration
#----------------------------------

# LDAP authentication mechanism. This is one of `simple` or a SASL mechanism supported by JNDI,
# for example `DIGEST-MD5`. `simple` is basic username
# and password authentication and SASL is used for more advanced mechanisms. See RFC 2251 LDAPv3
# documentation for more details.
#dbms.security.ldap.authentication.mechanism=simple

# LDAP user DN template. An LDAP object is referenced by its distinguished name (DN), and a user DN is
# an LDAP fully-qualified unique user identifier. This setting is used to generate an LDAP DN that
# conforms with the LDAP directory's schema from the user principal that is submitted with the
# authentication token when logging in.
# The special token {0} is a placeholder where the user principal will be substituted into the DN string.
#dbms.security.ldap.authentication.user_dn_template=uid={0},ou=users,dc=example,dc=com

# Determines if the result of authentication via the LDAP server should be cached or not.
# Caching is used to limit the number of LDAP requests that have to be made over the network
# for users that have already been authenticated successfully. A user can be authenticated against
# an existing cache entry (instead of via an LDAP server) as long as it is alive
# (see `dbms.security.auth_cache_ttl`).
# An important consequence of setting this to `true` is that
# Neo4j then needs to cache a hashed version of the credentials in order to perform credentials
# matching. This hashing is done using a cryptographic hash function together with a random salt.
# Preferably a conscious decision should be made if this method is considered acceptable by
# the security standards of the organization in which this Neo4j instance is deployed.
#dbms.security.ldap.authentication.cache_enabled=true

#----------------------------------
# LDAP Authorization Configuration
#----------------------------------
# Authorization is performed by searching the directory for the groups that
# the user is a member of, and then map those groups to Neo4j roles.

# Perform LDAP search for authorization info using a system account instead of the user's own account.
#
# If this is set to `false` (default), the search for group membership will be performed
# directly after authentication using the LDAP context bound with the user's own account.
# The mapped roles will be cached for the duration of `dbms.security.auth_cache_ttl`,
# and then expire, requiring re-authentication. To avoid frequently having to re-authenticate
# sessions you may want to set a relatively long auth cache expiration time together with this option.
# NOTE: This option will only work if the users are permitted to search for their
# own group membership attributes in the directory.
#
# If this is set to `true`, the search will be performed using a special system account user
# with read access to all the users in the directory.
# You need to specify the username and password using the settings
# `dbms.security.ldap.authorization.system_username` and
# `dbms.security.ldap.authorization.system_password` with this option.
# Note that this account only needs read access to the relevant parts of the LDAP directory
# and does not need to have access rights to Neo4j, or any other systems.
#dbms.security.ldap.authorization.use_system_account=false

# An LDAP system account username to use for authorization searches when
# `dbms.security.ldap.authorization.use_system_account` is `true`.
# Note that the `dbms.security.ldap.authentication.user_dn_template` will not be applied to this username,
# so you may have to specify a full DN.
#dbms.security.ldap.authorization.system_username

# An LDAP system account password to use for authorization searches when
# `dbms.security.ldap.authorization.use_system_account` is `true`.
#dbms.security.ldap.authorization.system_password

# The name of the base object or named context to search for user objects when LDAP authorization is enabled.
# A common case is that this matches the last part of `dbms.security.ldap.authentication.user_dn_template`.
#dbms.security.ldap.authorization.user_search_base=ou=users,dc=example,dc=com

# The LDAP search filter to search for a user principal when LDAP authorization is
# enabled. The filter should contain the placeholder token {0} which will be substituted for the
# user principal.
#dbms.security.ldap.authorization.user_search_filter=(&(objectClass=*)(uid={0}))

# A list of attribute names on a user object that contains groups to be used for mapping to roles
# when LDAP authorization is enabled. This setting is ignored when `dbms.ldap_authorization_nested_groups_enabled` is `true`.
#dbms.security.ldap.authorization.group_membership_attributes=memberOf

# This setting determines whether multiple LDAP search results will be processed (as is required for the lookup of nested groups).
# If set to `true` then instead of using attributes on the user object to determine group membership (as specified by
# `dbms.security.ldap.authorization.group_membership_attributes`), the `user` object will only be used to determine the user's
# Distinguished Name, which will subsequently be used with  `dbms.security.ldap.authorization.user_search_filter`
# in order to perform a nested group search. The Distinguished Names of the resultant group search results will be used to determine roles.
#dbms.security.ldap.authorization.nested_groups_enabled=false

# The search template which will be used to find the nested groups which the user is a member of.
# The filter should contain the placeholder token `{0}` which will be substituted with the user's
# Distinguished Name (which is found for the specified user principle using `dbms.security.ldap.authorization.user_search_filter`).
# The default value specifies Active Directory's LDAP_MATCHING_RULE_IN_CHAIN (aka 1.2.840.113556.1.4.1941) implementation
# which will walk the ancestry of group membership for the specified user.
#dbms.security.ldap.authorization.nested_groups_search_filter=(&(objectclass=group)(member:1.2.840.113556.1.4.1941:={0}))

# An authorization mapping from LDAP group names to Neo4j role names.
# The map should be formatted as a semicolon separated list of key-value pairs, where the
# key is the LDAP group name and the value is a comma separated list of corresponding role names.
# For example: group1=role1;group2=role2;group3=role3,role4,role5
#
# You could also use whitespaces and quotes around group names to make this mapping more readable,
# for example: dbms.security.ldap.authorization.group_to_role_mapping=\
#          "cn=Neo4j Read Only,cn=users,dc=example,dc=com"      = reader;    \
#          "cn=Neo4j Read-Write,cn=users,dc=example,dc=com"     = publisher; \
#          "cn=Neo4j Schema Manager,cn=users,dc=example,dc=com" = architect; \
#          "cn=Neo4j Administrator,cn=users,dc=example,dc=com"  = admin
#dbms.security.ldap.authorization.group_to_role_mapping

#*****************************************************************
# OpenID Connect configuration
#*****************************************************************

# The display name for the provider. This will be displayed in clients such as Neo4j Browser and Bloom.
#dbms.security.oidc.<provider>.display_name

# The OIDC auth_flow for clients such as Neo4j Browser and Bloom to use. Supported values are 'pkce' and 'implicit'
#dbms.security.oidc.<provider>.auth_flow=pkce

# The OpenID Connect Discovery URL for the provider
#dbms.security.oidc.<provider>.well_known_discovery_uri

# URL of the provider's Authorization Endpoint
#dbms.security.oidc.<provider>.auth_endpoint

# Parameters to use with the Authorization Endpoint.
#dbms.security.oidc.<provider>.auth_params

# URL of the provider's OAuth 2.0 Token Endpoint
#dbms.security.oidc.<provider>.token_endpoint

# Parameters to use with the Token Endpoint.
#dbms.security.oidc.<provider>.token_params

# URL of the provider's JSON Web Key Set
#dbms.security.oidc.<provider>.jwks_uri

# URL of the provider's UserInfo Endpoint
#dbms.security.oidc.<provider>.user_info_uri

# URL that the provider asserts as its issuer identifier. This will be checked against the iss claim in the token
#dbms.security.oidc.<provider>.issuer

# The expected value for the `aud` claim
#dbms.security.oidc.<provider>.audience

# The client_id of this client as issued by the provider.
#dbms.security.oidc.<provider>.client_id

# Whether to fetch the groups claim from the user info endpoint on the identity provider. The default is false, read it from the token.
#dbms.security.oidc.<provider>.get_groups_from_user_info=false

# Whether to fetch the username claim from the user info endpoint on the identity provider. The default is false, read it from the token.
#dbms.security.oidc.<provider>.get_username_from_user_info=false

# The claim to use for the database username.
#dbms.security.oidc.<provider>.claims.username=sub

# The claim to use for the database roles.
#dbms.security.oidc.<provider>.claims.groups

# General parameters to use with the Identity Provider.
#dbms.security.oidc.<provider>.params

# General config to use with the Identity Provider.
#dbms.security.oidc.<provider>.config

# An authorization mapping from identity provider group names to Neo4j role names. See dbms.security.ldap.authorization.group_to_role_mapping above
# for the format.
#dbms.security.oidc.<provider>.authorization.group_to_role_mapping

#*****************************************************************
# Miscellaneous configuration
#*****************************************************************

# Compresses the metric archive files.
server.metrics.csv.rotation.compression=zip

# Determines if Cypher will allow using file URLs when loading data using
# `LOAD CSV`. Setting this value to `false` will cause Neo4j to fail `LOAD CSV`
# clauses that load data from the file system.
#dbms.security.allow_csv_import_from_file_urls=true


# Value of the Access-Control-Allow-Origin header sent over any HTTP or HTTPS
# connector. This defaults to '*', which allows broadest compatibility. Note
# that any URI provided here limits HTTP/HTTPS access to that URI only.
#dbms.security.http_access_control_allow_origin=*

# Value of the HTTP Strict-Transport-Security (HSTS) response header. This header
# tells browsers that a webpage should only be accessed using HTTPS instead of HTTP.
# It is attached to every HTTPS response. Setting is not set by default so
# 'Strict-Transport-Security' header is not sent. Value is expected to contain
# directives like 'max-age', 'includeSubDomains' and 'preload'.
#dbms.security.http_strict_transport_security

# Retention policy for transaction logs needed to perform recovery and backups.
#db.tx_log.rotation.retention_policy=2 days

# Limit the number of IOs the background checkpoint process will consume per second.
# This setting is advisory, is ignored in Neo4j Community Edition, and is followed to
# best effort in Enterprise Edition.
# An IO is in this case a 8 KiB (mostly sequential) write. Limiting the write IO in
# this way will leave more bandwidth in the IO subsystem to service random-read IOs,
# which is important for the response time of queries when the database cannot fit
# entirely in memory. The only drawback of this setting is that longer checkpoint times
# may lead to slightly longer recovery times in case of a database or system crash.
# A lower number means lower IO pressure, and consequently longer checkpoint times.
# Set this to -1 to disable the IOPS limit and remove the limitation entirely,
# this will let the checkpointer flush data as fast as the hardware will go.
# Removing the setting, or commenting it out, will set the default value of 600.
# db.checkpoint.iops.limit=600

# Whether or not any database on this instance are read_only by default.
# If false, individual databases may be marked as read_only using dbms.database.read_only.
# If true, individual databases may be marked as writable using dbms.databases.writable.
#dbms.databases.default_to_read_only=false

# Comma separated list of JAX-RS packages containing JAX-RS resources, one
# package name for each mountpoint. The listed package names will be loaded
# under the mountpoints specified. Uncomment this line to mount the
# org.neo4j.examples.server.unmanaged.HelloWorldResource.java from
# neo4j-server-examples under /examples/unmanaged, resulting in a final URL of
# http://localhost:7474/examples/unmanaged/helloworld/{nodeId}
#server.unmanaged_extension_classes=org.neo4j.examples.server.unmanaged=/examples/unmanaged

# A comma separated list of procedures and user defined functions that are allowed
# full access to the database through unsupported/insecure internal APIs.
dbms.security.procedures.unrestricted=jwt.security.*,apoc.*

# A comma separated list of procedures to be loaded by default.
# Leaving this unconfigured will load all procedures found.
#dbms.security.procedures.allowlist=apoc.coll.*,apoc.load.*,gds.*

# For how long should drivers cache the discovery data from
# the dbms.routing.getRoutingTable() procedure. Defaults to 300s.
#dbms.routing_ttl=300s

#********************************************************************
# JVM Parameters
#********************************************************************

# G1GC generally strikes a good balance between throughput and tail
# latency, without too much tuning.
server.jvm.additional=-XX:+UseG1GC

# Have common exceptions keep producing stack traces, so they can be
# debugged regardless of how often logs are rotated.
server.jvm.additional=-XX:-OmitStackTraceInFastThrow

# Make sure that `initmemory` is not only allocated, but committed to
# the process, before starting the database. This reduces memory
# fragmentation, increasing the effectiveness of transparent huge
# pages. It also reduces the possibility of seeing performance drop
# due to heap-growing GC events, where a decrease in available page
# cache leads to an increase in mean IO response time.
# Try reducing the heap memory, if this flag degrades performance.
server.jvm.additional=-XX:+AlwaysPreTouch

# Trust that non-static final fields are really final.
# This allows more optimizations and improves overall performance.
# NOTE: Disable this if you use embedded mode, or have extensions or dependencies that may use reflection or
# serialization to change the value of final fields!
server.jvm.additional=-XX:+UnlockExperimentalVMOptions
server.jvm.additional=-XX:+TrustFinalNonStaticFields

# Disable explicit garbage collection, which is occasionally invoked by the JDK itself.
server.jvm.additional=-XX:+DisableExplicitGC

# Allow Neo4j to use @Contended annotation
#server.jvm.additional=-XX:-RestrictContended

# Restrict size of cached JDK buffers to 1 KB
server.jvm.additional=-Djdk.nio.maxCachedBufferSize=1024

# More efficient buffer allocation in Netty by allowing direct no cleaner buffers.
server.jvm.additional=-Dio.netty.tryReflectionSetAccessible=true

# Exits JVM on the first occurrence of an out-of-memory error. Its preferable to restart VM in case of out of memory errors.
# server.jvm.additional=-XX:+ExitOnOutOfMemoryError

# Expand Diffie Hellman (DH) key size from default 1024 to 2048 for DH-RSA cipher suites used in server TLS handshakes.
# This is to protect the server from any potential passive eavesdropping.
server.jvm.additional=-Djdk.tls.ephemeralDHKeySize=2048

# This mitigates a DDoS vector.
server.jvm.additional=-Djdk.tls.rejectClientInitiatedRenegotiation=true

# Enable remote debugging
#server.jvm.additional=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005

# This filter prevents deserialization of arbitrary objects via java object serialization, addressing potential vulnerabilities.
# By default this filter whitelists all neo4j classes, as well as classes from the hazelcast library and the java standard library.
# These defaults should only be modified by expert users!
# For more details (including filter syntax) see: https://openjdk.java.net/jeps/290
#server.jvm.additional=-Djdk.serialFilter=java.**;org.neo4j.**;com.neo4j.**;com.hazelcast.**;net.sf.ehcache.Element;com.sun.proxy.*;org.openjdk.jmh.**;!*

# Increase the default flight recorder stack sampling depth from 64 to 256, to avoid truncating frames when profiling.
server.jvm.additional=-XX:FlightRecorderOptions=stackdepth=256

# Allow profilers to sample between safepoints. Without this, sampling profilers may produce less accurate results.
server.jvm.additional=-XX:+UnlockDiagnosticVMOptions
server.jvm.additional=-XX:+DebugNonSafepoints

# Open modules for neo4j to allow internal access
server.jvm.additional=--add-opens=java.base/java.nio=ALL-UNNAMED
server.jvm.additional=--add-opens=java.base/java.io=ALL-UNNAMED
server.jvm.additional=--add-opens=java.base/sun.nio.ch=ALL-UNNAMED

# Disable logging JMX endpoint.
server.jvm.additional=-Dlog4j2.disable.jmx=true

# Limit JVM metaspace and code cache to allow garbage collection. Used by cypher for code generation and may grow indefinitely unless constrained.
# Useful for memory constrained environments
#server.jvm.additional=-XX:MaxMetaspaceSize=1024m
#server.jvm.additional=-XX:ReservedCodeCacheSize=512m

#********************************************************************
# Wrapper Windows NT/2000/XP Service Properties
#********************************************************************
# WARNING - Do not modify any of these properties when an application
#  using this configuration file has been installed as a service.
#  Please uninstall the service before modifying this section.  The
#  service can then be reinstalled.

# Name of the service
server.windows_service_name=neo4j

#********************************************************************
# Other Neo4j system properties
#********************************************************************

dbms.memory.heap.initial_size=512m
dbms.memory.heap.max_size=1G
dbms.memory.pagecache.size=512m
dbms.windows_service_name=neo4j-relate-dbms-1560cf9a-d318-43e7-86f7-d36267f09a1a
dbms.jvm.additional=-Dlog4j2.formatMsgNoLookups=true

apoc.import.file.enabled=true

This answer references GPT (OpenAI)
From the log output you provided, there are some warnings at startup, but those are not what prevents the DBMS from starting. The errors indicate that certain Neo4j Bolt protocol requests failed while being processed, which caused the connections to be terminated. This may be related to the apoc.import.file.enabled setting you tried to change.

After changing the configuration, make sure you edited the correct file and used the correct format. A mistake there can prevent Neo4j from loading the configuration file and therefore from starting. Check the file for syntax errors and make sure every path points to the right location. You can also restore the default configuration file and then try the change again.

If you are sure the configuration change is what caused the problem, restart Neo4j with the default configuration first, then reapply the change. Configuration is changed by editing the neo4j.conf file; for example, you can add the following lines to neo4j.conf to enable APOC file import:

apoc.import.file.enabled=true
apoc.import.file.use_neo4j_config=false


Then save the file and restart Neo4j. Note that configuration changes only take effect after Neo4j is restarted.
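
One caveat, offered as an assumption rather than something confirmed by the screenshots: the server.* keys in the posted file suggest Neo4j 5.x, and Neo4j 5 validates neo4j.conf strictly by default, refusing to start when it meets a key it does not recognize, including apoc.* keys. If that is what is happening, the usual fix is to keep the APOC settings out of neo4j.conf and put them in a separate conf/apoc.conf instead, for example:

# conf/apoc.conf: create this file next to neo4j.conf if it does not exist
apoc.import.file.enabled=true
apoc.import.file.use_neo4j_config=false

Alternatively, strict validation can be relaxed by adding server.config.strict_validation.enabled=false to neo4j.conf, but moving the APOC settings into apoc.conf is the cleaner option.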

This answer references ChatGPT

Your Neo4j server fails to start, most likely because of an invalid configuration entry. The error message mentions an unrecognized setting, "apoc.import.file.enabled", which means the setting either does not exist or is misspelled.

To resolve this, try the following steps:

1. Go to Neo4j's configuration folder, usually under "neo4j/conf/" (a sketch of the file layout follows at the end of this answer).

2. Open the "neo4j.conf" file and look for the "apoc.import.file.enabled" entry. If it is missing, you can add it manually; make sure it is spelled correctly and set to "true".

Save and close "neo4j.conf", then restart the Neo4j server. If it still fails to start, check whether the other settings are correct and make sure all required plugins are properly installed and configured.
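
For orientation, here is a minimal sketch of where the relevant files usually live, assuming a default tarball/zip installation layout (Neo4j Desktop keeps the same relative structure inside each DBMS folder):

<NEO4J_HOME>/
    conf/neo4j.conf     # main server configuration, edited in the steps above
    conf/apoc.conf      # APOC-specific settings; create it if it does not exist
    plugins/apoc-*.jar  # the installed APOC plugin jar
    logs/neo4j.log      # startup messages
    logs/debug.log      # detailed errors, the first place to look when startup fails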

Drawing on GPT and my own thinking: the error information is incomplete, which makes it hard to pinpoint the exact problem. Could you paste the full error text? That would make it easier to answer your question accurately. Based on what you have provided so far, you can try the following:

1. Check the setting itself: confirm that apoc.import.file.enabled=true is written correctly in the configuration file and that there are no other syntax errors.

2. Check the log files: look through the Neo4j logs, especially the startup log, for other error messages such as file-permission problems or conflicting configuration entries.

3. Check the plugin version: confirm that the installed APOC version is compatible with your Neo4j version. See the APOC documentation or the compatibility matrix on the official site.

4. Disable the plugin: try disabling APOC and see whether the server starts normally. You can comment out the related lines in the configuration file (see the snippet after this list), or temporarily move the APOC jar out of the plugins directory.

5. Reinstall: if none of the above helps, try reinstalling Neo4j and the APOC plugin. Make sure no errors occur during installation, and confirm that Neo4j starts successfully before reapplying your configuration changes.
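
To make steps 2 and 4 concrete, here is a minimal sketch, assuming your neo4j.conf ends the way the posted file does (this is a way to bisect the failure, not a confirmed diagnosis): comment out only the line you added, leave everything else untouched, and retry the start while watching logs/debug.log.

# last lines of conf/neo4j.conf: temporarily disable the newly added key
#apoc.import.file.enabled=true

If the server starts with that single line commented out, the failure is almost certainly the configuration validator rejecting the apoc.* key inside neo4j.conf rather than a problem with the APOC jar itself, and moving the setting into conf/apoc.conf as described under the first answer is the way forward.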