How do I resolve this error when running BERT?

I've recently been working through Hung-yi Lee's HW07, which involves BERT. The first time I ran the code, loading the BERT model kept failing with a permission error and an OSError while loading the model. The same code runs fine on AutoDL, but I'm on my university's server, which has permission restrictions. How can I resolve this?



```python
[Errno 13] Permission denied: '/home/kaihua/.cache/huggingface/transformers/6cc404ca8136bc87bae0fb24f2259904943d776a6c5ddc26598bbdc319476f42.0f9bcd8314d841c06633e7b92b04509f1802c16796ee67b0f1177065739e24ae.lock'
---------------------------------------------------------------------------
PermissionError                           Traceback (most recent call last)
File ~/.local/lib/python3.9/site-packages/transformers/configuration_utils.py:457, in PretrainedConfig.get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
    455 try:
    456     # Load from URL or cache if already cached
--> 457     resolved_config_file = cached_path(
    458         config_file,
    459         cache_dir=cache_dir,
    460         force_download=force_download,
    461         proxies=proxies,
    462         resume_download=resume_download,
    463         local_files_only=local_files_only,
    464         use_auth_token=use_auth_token,
    465         user_agent=user_agent,
    466     )
    467     # Load config dict

File ~/.local/lib/python3.9/site-packages/transformers/file_utils.py:1165, in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, use_auth_token, local_files_only)
   1163 if is_remote_url(url_or_filename):
   1164     # URL, so get it from the cache (downloading if necessary)
-> 1165     output_path = get_from_cache(
   1166         url_or_filename,
   1167         cache_dir=cache_dir,
   1168         force_download=force_download,
   1169         proxies=proxies,
   1170         resume_download=resume_download,
   1171         user_agent=user_agent,
   1172         use_auth_token=use_auth_token,
   1173         local_files_only=local_files_only,
   1174     )
   1175 elif os.path.exists(url_or_filename):
   1176     # File, and it exists.

File ~/.local/lib/python3.9/site-packages/transformers/file_utils.py:1399, in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, use_auth_token, local_files_only)
   1398 lock_path = cache_path + ".lock"
-> 1399 with FileLock(lock_path):
   1400 
   1401     # If the download just completed while the lock was activated.
   1402     if os.path.exists(cache_path) and not force_download:
   1403         # Even if returning early like here, the lock will be released.

File ~/.local/lib/python3.9/site-packages/filelock/_api.py:220, in BaseFileLock.__enter__(self)
    215 """
    216 Acquire the lock.
    217 
    218 :return: the lock object
    219 """
--> 220 self.acquire()
    221 return self

File ~/.local/lib/python3.9/site-packages/filelock/_api.py:173, in BaseFileLock.acquire(self, timeout, poll_interval, poll_intervall, blocking)
    172         _LOGGER.debug("Attempting to acquire lock %s on %s", lock_id, lock_filename)
--> 173         self._acquire()
    175 if self.is_locked:

File ~/.local/lib/python3.9/site-packages/filelock/_unix.py:35, in UnixFileLock._acquire(self)
     34 open_mode = os.O_RDWR | os.O_CREAT | os.O_TRUNC
---> 35 fd = os.open(self._lock_file, open_mode)
     36 try:

PermissionError: [Errno 13] Permission denied: '/home/kaihua/.cache/huggingface/transformers/6cc404ca8136bc87bae0fb24f2259904943d776a6c5ddc26598bbdc319476f42.0f9bcd8314d841c06633e7b92b04509f1802c16796ee67b0f1177065739e24ae.lock'

During handling of the above exception, another exception occurred:

OSError                                   Traceback (most recent call last)
Input In [5], in <cell line: 1>()
----> 1 model = BertForQuestionAnswering.from_pretrained("bert-base-chinese").to(device)
      2 tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")

File ~/.local/lib/python3.9/site-packages/transformers/modeling_utils.py:975, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    973 if not isinstance(config, PretrainedConfig):
    974     config_path = config if config is not None else pretrained_model_name_or_path
--> 975     config, model_kwargs = cls.config_class.from_pretrained(
    976         config_path,
    977         *model_args,
    978         cache_dir=cache_dir,
    979         return_unused_kwargs=True,
    980         force_download=force_download,
    981         resume_download=resume_download,
    982         proxies=proxies,
    983         local_files_only=local_files_only,
    984         use_auth_token=use_auth_token,
    985         revision=revision,
    986         _from_auto=from_auto_class,
    987         _from_pipeline=from_pipeline,
    988         **kwargs,
    989     )
    990 else:
    991     model_kwargs = kwargs

File ~/.local/lib/python3.9/site-packages/transformers/configuration_utils.py:401, in PretrainedConfig.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
    331 @classmethod
    332 def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> "PretrainedConfig":
    333     r"""
    334     Instantiate a :class:`~transformers.PretrainedConfig` (or a derived class) from a pretrained model
    335     configuration.
   (...)
    399 
    400     """
--> 401     config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
    402     if config_dict.get("model_type", False) and hasattr(cls, "model_type"):
    403         assert (
    404             config_dict["model_type"] == cls.model_type
    405         ), f"You tried to initiate a model of type '{cls.model_type}' with a pretrained model of type '{config_dict['model_type']}'"

File ~/.local/lib/python3.9/site-packages/transformers/configuration_utils.py:477, in PretrainedConfig.get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
    471     logger.error(err)
    472     msg = (
    473         f"Can't load config for '{pretrained_model_name_or_path}'. Make sure that:\n\n"
    474         f"- '{pretrained_model_name_or_path}' is a correct model identifier listed on 'https://huggingface.co/models'\n\n"
    475         f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a {CONFIG_NAME} file\n\n"
    476     )
--> 477     raise EnvironmentError(msg)
    479 except json.JSONDecodeError:
    480     msg = (
    481         f"Couldn't reach server at '{config_file}' to download configuration file or "
    482         "configuration file is not a valid JSON file. "
    483         f"Please check network or file content here: {resolved_config_file}."
    484     )

OSError: Can't load config for 'bert-base-chinese'. Make sure that:

- 'bert-base-chinese' is a correct model identifier listed on 'https://huggingface.co/models'

- or 'bert-base-chinese' is the correct path to a directory containing a config.json file


```

This is most likely because your account on the school server doesn't have sufficient permissions to cache and load the BERT model. A few ways to work around it:

Ask the administrator for access: request the permissions you need from your school's IT department or the server administrator.

Use the model locally: download the BERT model to your own machine for training and testing, then upload the results to the server (a loading sketch follows this list).

Use a cloud service: run the BERT model on compute resources from a cloud provider. That sidesteps the permission problem and makes managing resources easier.

Use Docker: try loading the model inside a Docker container.

Different solutions fit different scenarios and needs; hopefully one of these suggestions helps you solve the problem.
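As a concrete illustration of the "use the model locally" option: you can download the bert-base-chinese files (config.json, vocab.txt, pytorch_model.bin) from https://huggingface.co/bert-base-chinese yourself and load them from a directory you control, so nothing has to be written to ~/.cache at load time. A minimal sketch, assuming the files are placed in ./bert-base-chinese (the path is just an example):

```python
import torch
from transformers import BertForQuestionAnswering, BertTokenizerFast

# Example path only -- put the downloaded config.json, vocab.txt and
# pytorch_model.bin for bert-base-chinese in this directory.
local_model_dir = "./bert-base-chinese"

device = "cuda" if torch.cuda.is_available() else "cpu"

# from_pretrained also accepts a local directory, so the Hugging Face
# cache under ~/.cache/huggingface is never touched.
model = BertForQuestionAnswering.from_pretrained(local_model_dir).to(device)
tokenizer = BertTokenizerFast.from_pretrained(local_model_dir)
```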

  • I also found a very good blog post that might help, link: 对Bert的理解

This error is caused by your account not having permission to create and write files under /home/kaihua/.cache/huggingface/transformers/. You can try the following:

1. Run the code with sudo to get elevated privileges.
2. Change the permissions on /home/kaihua/.cache/huggingface/transformers/ so that your user account can write to it: chmod -R 755 /home/kaihua/.cache/huggingface/transformers/ (if the files there were created by root or another user, you may also need to chown them back to your account).
3. Set the cache_dir parameter in the code to move the cache to a directory you have write access to (see the sketch below).
Choose whichever method fits the permissions you actually have on the machine.
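For option 3, here is a minimal sketch of redirecting the cache to a directory your account can actually write to. The path /home/kaihua/hf_cache is only an example; both the cache_dir argument of from_pretrained and the TRANSFORMERS_CACHE environment variable are supported by the transformers library:

```python
import os

# Redirect the Hugging Face cache *before* transformers is imported.
# The path is only an example -- use any directory you can write to.
os.environ["TRANSFORMERS_CACHE"] = "/home/kaihua/hf_cache"

import torch
from transformers import BertForQuestionAnswering, BertTokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"

# Alternatively, pass cache_dir explicitly on each from_pretrained call.
model = BertForQuestionAnswering.from_pretrained(
    "bert-base-chinese", cache_dir="/home/kaihua/hf_cache"
).to(device)
tokenizer = BertTokenizerFast.from_pretrained(
    "bert-base-chinese", cache_dir="/home/kaihua/hf_cache"
)
```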

It does look like the system simply isn't granting enough permission, so the cache files can't be written.