
Commit f2b4b94

Fix typos in multiple files (#4032)
1 parent 9660049 commit f2b4b94

File tree

44 files changed: +111 / -324 lines changed


README.md

Lines changed: 2 additions & 2 deletions
@@ -173,7 +173,7 @@ Via the easy-to-use, efficient, flexible and scalable implementation, our vision
 - 🏆 **Streaming ASR and TTS System**: we provide production ready streaming asr and streaming tts system.
 - 💯 **Rule-based Chinese frontend**: our frontend contains Text Normalization and Grapheme-to-Phoneme (G2P, including Polyphone and Tone Sandhi). Moreover, we use self-defined linguistic rules to adapt Chinese context.
 - 📦 **Varieties of Functions that Vitalize both Industrial and Academia**:
-  - 🛎️ *Implementation of critical audio tasks*: this toolkit contains audio functions like Automatic Speech Recognition, Text-to-Speech Synthesis, Speaker Verfication, KeyWord Spotting, Audio Classification, and Speech Translation, etc.
+  - 🛎️ *Implementation of critical audio tasks*: this toolkit contains audio functions like Automatic Speech Recognition, Text-to-Speech Synthesis, Speaker Verification, KeyWord Spotting, Audio Classification, and Speech Translation, etc.
   - 🔬 *Integration of mainstream models and datasets*: the toolkit implements modules that participate in the whole pipeline of the speech tasks, and uses mainstream datasets like LibriSpeech, LJSpeech, AIShell, CSMSC, etc. See also [model list](#model-list) for more details.
   - 🧩 *Cascaded models application*: as an extension of the typical traditional audio tasks, we combine the workflows of the aforementioned tasks with other fields like Natural language processing (NLP) and Computer Vision (CV).

@@ -1025,7 +1025,7 @@ You are warmly welcome to submit questions in [discussions](https://github.com/P
 - Many thanks to [vpegasus](https://github.com/vpegasus)/[xuesebot](https://github.com/vpegasus/xuesebot) for developing a rasa chatbot, which is able to speak and listen thanks to PaddleSpeech.
 - Many thanks to [chenkui164](https://github.com/chenkui164)/[FastASR](https://github.com/chenkui164/FastASR) for the C++ inference implementation of PaddleSpeech ASR.
 - Many thanks to [heyudage](https://github.com/heyudage)/[VoiceTyping](https://github.com/heyudage/VoiceTyping) for the real-time voice typing tool implementation of PaddleSpeech ASR streaming services.
-- Many thanks to [EscaticZheng](https://github.com/EscaticZheng)/[ps3.9wheel-install](https://github.com/EscaticZheng/ps3.9wheel-install) for the python3.9 prebuilt wheel for PaddleSpeech installation in Windows without Viusal Studio.
+- Many thanks to [EscaticZheng](https://github.com/EscaticZheng)/[ps3.9wheel-install](https://github.com/EscaticZheng/ps3.9wheel-install) for the python3.9 prebuilt wheel for PaddleSpeech installation in Windows without Visual Studio.
 Besides, PaddleSpeech depends on a lot of open source repositories. See [references](./docs/source/reference.md) for more information.
 - Many thanks to [chinobing](https://github.com/chinobing)/[FastAPI-PaddleSpeech-Audio-To-Text](https://github.com/chinobing/FastAPI-PaddleSpeech-Audio-To-Text) for converting audio to text based on FastAPI and PaddleSpeech.
 - Many thanks to [MistEO](https://github.com/MistEO)/[Pallas-Bot](https://github.com/MistEO/Pallas-Bot) for QQ bot based on PaddleSpeech TTS.

audio/paddleaudio/datasets/dataset.py

Lines changed: 1 addition & 1 deletion
@@ -43,7 +43,7 @@ def __init__(self,
                  sample_rate: int=None,
                  **kwargs):
         """
-        Ags:
+        Args:
             files (:obj:`List[str]`): A list of absolute path of audio files.
             labels (:obj:`List[int]`): Labels of audio files.
             feat_type (:obj:`str`, `optional`, defaults to `raw`):
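The `Ags:` → `Args:` fixes in these dataset files restore the Google-style docstring section that documentation tools (for example Sphinx with the napoleon extension) recognize. A minimal sketch of such a constructor docstring, using a hypothetical stand-in class rather than the repo's actual dataset code:

```python
from typing import List


class AudioFilesDataset:
    """Illustrative stand-in for the dataset classes touched by this commit."""

    def __init__(self, files: List[str], labels: List[int], feat_type: str = "raw"):
        """
        Args:
            files (List[str]): A list of absolute paths of audio files.
            labels (List[int]): Labels of audio files.
            feat_type (str, optional): Feature type, defaults to "raw".
        """
        self.files = files
        self.labels = labels
        self.feat_type = feat_type


ds = AudioFilesDataset(["/data/a.wav"], [0])
print(ds.feat_type)  # → raw
```

The section header must read exactly `Args:` for napoleon-style parsers to pick up the parameter descriptions, which is why the typo matters beyond cosmetics.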

audio/paddleaudio/datasets/esc50.py

Lines changed: 1 addition & 1 deletion
@@ -111,7 +111,7 @@ def __init__(self,
                  feat_type: str='raw',
                  **kwargs):
         """
-        Ags:
+        Args:
             mode (:obj:`str`, `optional`, defaults to `train`):
                 It identifies the dataset mode (train or dev).
             split (:obj:`int`, `optional`, defaults to 1):

audio/paddleaudio/datasets/gtzan.py

Lines changed: 1 addition & 1 deletion
@@ -57,7 +57,7 @@ def __init__(self,
                  feat_type='raw',
                  **kwargs):
         """
-        Ags:
+        Args:
             mode (:obj:`str`, `optional`, defaults to `train`):
                 It identifies the dataset mode (train or dev).
             seed (:obj:`int`, `optional`, defaults to 0):

audio/paddleaudio/datasets/tess.py

Lines changed: 1 addition & 1 deletion
@@ -66,7 +66,7 @@ def __init__(self,
                  feat_type='raw',
                  **kwargs):
         """
-        Ags:
+        Args:
             mode (:obj:`str`, `optional`, defaults to `train`):
                 It identifies the dataset mode (train or dev).
             seed (:obj:`int`, `optional`, defaults to 0):

audio/paddleaudio/datasets/urban_sound.py

Lines changed: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ def __init__(self,
         super(UrbanSound8K, self).__init__(
             files=files, labels=labels, feat_type=feat_type, **kwargs)
         """
-        Ags:
+        Args:
             mode (:obj:`str`, `optional`, defaults to `train`):
                 It identifies the dataset mode (train or dev).
             split (:obj:`int`, `optional`, defaults to 1):

audio/tests/backends/sox_io/save_test.py

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ def assert_save_consistency(
             test_mode: str="path", ):
         """`save` function produces file that is comparable with `sox` command

-        To compare that the file produced by `save` function agains the file produced by
+        To compare that the file produced by `save` function against the file produced by
         the equivalent `sox` command, we need to load both files.
         But there are many formats that cannot be opened with common Python modules (like
         SciPy).
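The consistency check this docstring describes can be sketched, for the plain-WAV case only, with the stdlib `wave` module. This is a hypothetical helper, not the repo's test code; the real test must also handle formats that common Python modules cannot open:

```python
import struct
import tempfile
import wave


def load_wav(path):
    """Decode a WAV file into (params, raw PCM bytes) using only the stdlib."""
    with wave.open(path, "rb") as f:
        return f.getparams(), f.readframes(f.getnframes())


def assert_wav_consistency(path_a, path_b):
    """Assert that two WAV files carry identical format parameters and PCM data."""
    params_a, frames_a = load_wav(path_a)
    params_b, frames_b = load_wav(path_b)
    assert params_a == params_b, "format parameters differ"
    assert frames_a == frames_b, "decoded samples differ"


def write_test_wav(path):
    """Write a tiny mono 16-bit WAV with a fixed 4-sample payload."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(8000)
        f.writeframes(struct.pack("<4h", 0, 1000, -1000, 0))


with tempfile.TemporaryDirectory() as d:
    a, b = d + "/a.wav", d + "/b.wav"
    write_test_wav(a)  # stands in for the `save` function's output
    write_test_wav(b)  # stands in for the equivalent `sox` command's output
    assert_wav_consistency(a, b)
    print("consistent")
```

Comparing decoded parameters and frames, rather than raw file bytes, is what lets the check tolerate benign container-level differences between the two writers.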

demos/TTSAndroid/README.md

Lines changed: 4 additions & 4 deletions
@@ -8,7 +8,7 @@

 ### Environment setup

-1. Install the Android Studio tool locally; for detailed installation instructions see the [Android Stuido official site](https://developer.android.com/studio)
+1. Install the Android Studio tool locally; for detailed installation instructions see the [Android Studio official site](https://developer.android.com/studio)
 2. Prepare an Android phone and enable USB debugging mode. To enable it: `Phone Settings -> find Developer options -> turn on Developer options and USB debugging mode`

 **Note**

@@ -20,10 +20,10 @@
 2. Connect the phone to the computer, turn on USB debugging and file transfer mode, and connect your phone device in Android Studio (the phone needs to allow installing software over USB).

 **Note:**
->1. If you encounter an NDK configuration error while importing, building, or running the project, open `File > Project Structure > SDK Location` and change `Andriod NDK location` to the path of the NDK configured on your machine.
->2. If you downloaded the NDK through Andriod Studio's SDK Tools (see "Environment setup" in this section), you can simply select the default path from the drop-down box.
+>1. If you encounter an NDK configuration error while importing, building, or running the project, open `File > Project Structure > SDK Location` and change `Android NDK location` to the path of the NDK configured on your machine.
+>2. If you downloaded the NDK through Android Studio's SDK Tools (see "Environment setup" in this section), you can simply select the default path from the drop-down box.
 >3. There is another way to configure the NDK: you can manually add the NDK path configuration `nkd.dir=/root/android-ndk-r20b` in the `TTSAndroid/local.properties` file
->4. If the above steps still do not resolve the NDK configuration error, try updating the Android Gradle plugin version following the [Update the Android Gradle plugin](https://developer.android.com/studio/releases/gradle-plugin?hl=zh-cn#updating-plugin) section of the official Andriod Studio documentation.
+>4. If the above steps still do not resolve the NDK configuration error, try updating the Android Gradle plugin version following the [Update the Android Gradle plugin](https://developer.android.com/studio/releases/gradle-plugin?hl=zh-cn#updating-plugin) section of the official Android Studio documentation.

 3. Click the Run button to automatically build the APP and install it on the phone. (This step automatically downloads the Paddle Lite inference library and models, and requires a network connection.)
 After success, the result looks like this:

demos/audio_searching/README.md

Lines changed: 2 additions & 2 deletions
@@ -217,7 +217,7 @@ Then to start the system server, and it provides HTTP backend services.
 - memory:132G

 dataset:
-- CN-Celeb, train size 650,000, test size 10,000, dimention 192, distance L2
+- CN-Celeb, train size 650,000, test size 10,000, dimension 192, distance L2

 recall and elapsed time statistics are shown in the following figure:

@@ -226,7 +226,7 @@ recall and elapsed time statistics are shown in the following figure:

 The retrieval framework based on Milvus takes about 2.9 milliseconds to retrieve on the premise of 90% recall rate, and it takes about 500 milliseconds for feature extraction (testing audio takes about 5 seconds), that is, a single audio test takes about 503 milliseconds in total, which can meet most application scenarios.

-* compute embeding takes 500 ms
+* compute embedding takes 500 ms
 * retrieval with cosine takes 2.9 ms
 * total takes 503 ms

demos/speech_server/README.md

Lines changed: 1 addition & 1 deletion
@@ -42,7 +42,7 @@ Currently the engine type supports two forms: python and inference (Paddle Inference)
     paddlespeech_server start --help
     ```
 Arguments:
-- `config_file`: yaml file of the app, defalut: ./conf/application.yaml
+- `config_file`: yaml file of the app, default: ./conf/application.yaml
 - `log_file`: log file. Default: ./log/paddlespeech.log

 Output:
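Beyond `--help`, the two documented arguments compose into a full start command. A hypothetical invocation using the README's default paths, shown as a sketch only (flag names come from the diff above; the command is not verified against a live install):

```shell
# Start the server with an explicit config and log file.
# Paths are the README's documented defaults, so both flags
# could be omitted to get the same effect.
paddlespeech_server start \
    --config_file ./conf/application.yaml \
    --log_file ./log/paddlespeech.log
```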
