The Wayback Machine - https://web.archive.org/web/20230128034822/https://github.com/scikit-learn/scikit-learn/pull/23442

FIX Enables label_ranking_average_precision_score to support sparse y_true #23442

Conversation

ShehanAT (Contributor)

Reference Issues/PRs

Fixes #22575

What does this implement/fix? Explain your changes.

Since the label_ranking_average_precision_score metric now uses the check_array function (link to relevant commit) in place of the previously used check_arrays function (link to relevant commit), passing a None value for the sparse_format parameter now raises the error: TypeError: A sparse matrix was passed, but dense data is required. Use X.toarray() to convert to a dense numpy array.

This PR changes the value of the sparse_format parameter so that whenever it is passed a None value, it is automatically assigned a value of ["csr", "csc"]. Doing so prevents the aforementioned error from being raised.

I have added a corresponding test for this fix in sklearn/metrics/tests/test_ranking.py: test_label_ranking_avg_precision_score_should_allow_csr_matrix_for_y_true_input().

I have run the pytest sklearn/metrics/tests/test_ranking.py command; all tests, including the newly added one, pass.

@ShehanAT ShehanAT changed the title Label ranking average precision score csr matrix fix shehan at Label_ranking_average_precision_score Does Not Allow csr_matrix for y_true Input Fix, Regarding Issue #22575 May 22, 2022
@ShehanAT ShehanAT changed the title Label_ranking_average_precision_score Does Not Allow csr_matrix for y_true Input Fix, Regarding Issue #22575 [MRG] Label_ranking_average_precision_score Does Not Allow csr_matrix for y_true Input Fix, Regarding Issue #22575 May 22, 2022
thomasjpfan (Member) left a comment


Thank you for the PR!

if accept_sparse is False:
    accept_sparse = ["csr", "csc"]
Member


We cannot do this: it changes the behavior of accept_sparse in check_array and is backward incompatible, which is what is causing the many CI failures.
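To make the backward incompatibility concrete, here is a small sketch (assuming scikit-learn, NumPy, and SciPy are installed) of the contract that rewriting accept_sparse=False inside check_array would break:

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.utils import check_array

X_sparse = csr_matrix(np.eye(2))

# accept_sparse=False is a documented contract: sparse input must raise.
# Silently rewriting False to ["csr", "csc"] inside check_array would
# accept sparse matrices for every caller that relies on this error.
raised = False
try:
    check_array(X_sparse, accept_sparse=False)
except TypeError:
    raised = True
print(raised)  # True
```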

To fix this issue, we need to adjust label_ranking_average_precision_score itself to accept y_true as sparse, by setting accept_sparse="csr" here:

y_true = check_array(y_true, ensure_2d=False)

From looking at the code, it may be a little difficult to do, since type_of_target does not really work with sparse targets. The quick fix is to adjust label_ranking_average_precision_score as follows:

y_true = check_array(y_true, ensure_2d=False, accept_sparse="csr")

...
if not issparse(y_true):
    # Handle badly formatted array and the degenerate case with one label
    y_type = type_of_target(y_true, input_name="y_true")
    if y_type != "multilabel-indicator" and not (
        y_type == "binary" and y_true.ndim == 2
    ):
        raise ValueError("{0} format is not supported".format(y_type))

    y_true = csr_matrix(y_true)

ShehanAT (Contributor, Author) commented May 23, 2022


Thanks for the feedback. I have included this change by adding the if not issparse(y_true): line in a subsequent commit now. Let me know if I'm missing anything.

sklearn/metrics/tests/test_ranking.py (two review threads, resolved)
y_true = csr_matrix(np.array([[1, 0, 0], [0, 0, 1]]))
y_score = np.array([[0.5, 0.9, 0.6], [0, 0, 1]])
result = label_ranking_average_precision_score(y_true, y_score)
assert result == pytest.approx(0.6666666666666666)
Member

Nit: This should work and looks a little bit better:

Suggested change:
- assert result == pytest.approx(0.6666666666666666)
+ assert result == pytest.approx(2/3)

Edit: Changed 1/3 to 2/3

ShehanAT (Contributor, Author)

The 1/3 value should be 2/3, right?

Member

Yeah, it's 2/3; I mistyped.
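As a sanity check on the 2/3 value, the expected score can be computed by hand from the standard label ranking average precision definition. This sketch mirrors the test data above and uses no scikit-learn:

```python
# Hand-computed label ranking average precision (LRAP) for the test case.
y_true = [[1, 0, 0], [0, 0, 1]]
y_score = [[0.5, 0.9, 0.6], [0.0, 0.0, 1.0]]

sample_scores = []
for truth, scores in zip(y_true, y_score):
    relevant = [j for j, t in enumerate(truth) if t == 1]
    precisions = []
    for j in relevant:
        # Rank of label j when all labels are sorted by descending score.
        rank = sum(1 for s in scores if s >= scores[j])
        # Number of relevant labels ranked at or above label j.
        hits = sum(1 for k in relevant if scores[k] >= scores[j])
        precisions.append(hits / rank)
    sample_scores.append(sum(precisions) / len(precisions))

lrap = sum(sample_scores) / len(sample_scores)
print(lrap)  # (1/3 + 1) / 2 = 2/3
```

Sample 1's only relevant label (score 0.5) ranks third of three, giving precision 1/3; sample 2's relevant label ranks first, giving 1; the mean is 2/3.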

@@ -811,6 +811,7 @@ def check_array(

if sp.issparse(array):
_ensure_no_complex_data(array)

Member

This diff can be reverted.

ShehanAT (Contributor, Author)

Done!

thomasjpfan (Member) left a comment

Merging with main should fix the CI failure.

Please add an entry to the change log at doc/whats_new/v1.2.rst with tag |Fix|. Like the other entries there, please reference this pull request with :pr: and credit yourself (and other contributors if applicable) with :user:.
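For reference, an entry along these lines could work. This is a sketch only; the exact wording and placement within doc/whats_new/v1.2.rst are up to the maintainers:

```rst
- |Fix| :func:`metrics.label_ranking_average_precision_score` now accepts
  sparse ``y_true`` input in CSR format.
  :pr:`23442` by :user:`ShehanAT`.
```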

@@ -37,6 +37,7 @@
from ..preprocessing import label_binarize
from ..utils._encode import _encode, _unique


Member

This diff is not needed.

ShehanAT (Contributor, Author)

Ok, removed now

if y_type != "multilabel-indicator" and not (
    y_type == "binary" and y_true.ndim == 2
):
    raise ValueError("{0} format is not supported".format(y_type))

y_true = csr_matrix(y_true)
Member

We only need to convert if y_true is dense:

Suggested change (indent the conversion so it runs only inside the if not issparse(y_true): block):
- y_true = csr_matrix(y_true)
+     y_true = csr_matrix(y_true)

check_array with accept_sparse="csr" will convert a sparse matrix to CSR if it needs to:

'csr', etc. If the input is sparse but not in the allowed format, it will be converted to the first listed format. True allows the input to be any format. False means that a sparse matrix input will raise an error.
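A quick sketch (assuming scikit-learn, NumPy, and SciPy are installed) illustrating that quoted behavior: a sparse input in a non-listed format is converted to the first listed format rather than rejected:

```python
import numpy as np
from scipy.sparse import csc_matrix
from sklearn.utils import check_array

# CSC is not in the accepted list, so check_array converts the input to
# the first (and here only) listed format, "csr", instead of raising.
X = csc_matrix(np.array([[1.0, 0.0], [0.0, 1.0]]))
X_checked = check_array(X, accept_sparse="csr")
print(X_checked.format)  # "csr"
```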

ShehanAT (Contributor, Author)

I see, fixed now

ShehanAT (Contributor, Author)

> Merging with main should fix the CI failure.
>
> Please add an entry to the change log at doc/whats_new/v1.2.rst with tag |Fix|. Like the other entries there, please reference this pull request with :pr: and credit yourself (and other contributors if applicable) with :user:.

I have added an entry to doc/whats_new/v1.2.rst now. Let me know if I'm missing anything else.

@thomasjpfan thomasjpfan changed the title [MRG] Label_ranking_average_precision_score Does Not Allow csr_matrix for y_true Input Fix, Regarding Issue #22575 FIX Enables label_ranking_average_precision_score to support sparse y_true May 24, 2022
thomasjpfan (Member) left a comment

Minor comment, otherwise LGTM.

I'm okay with this temporary solution, because type_of_target does not support sparse data yet.

doc/whats_new/v1.2.rst (outdated, resolved)
sklearn/metrics/tests/test_ranking.py (outdated, resolved)
ShehanAT and others added 2 commits May 24, 2022
cmarmo (Member) commented Aug 15, 2022

Hi @ShehanAT, you already have one approval; do you mind synchronizing with upstream and fixing the conflicts?
This will re-render the documentation and probably fix the failing builds, making it easier to attract new reviewers. Thanks!

ShehanAT (Contributor, Author) commented Aug 16, 2022

> Hi @ShehanAT, you already have one approval; do you mind synchronizing with upstream and fixing the conflicts? This will re-render the documentation and probably fix the failing builds, making it easier to attract new reviewers. Thanks!

Sure, I have merged and fixed conflicts involving the doc/whats_new/v1.2.rst file now.

Is it okay that all the recent upstream main commits got lumped into this PR, or would you like me to create a new PR with the same changes?

cmarmo (Member) commented Aug 16, 2022

Thanks @ShehanAT. Upstream commits should not be in the history of your PR; a new PR is not needed.
Could you please synchronize your main branch and then rebase your branch onto main?

$ git rebase main

Note that some relics of conflicts (>>>> HEAD) are still present: they should be removed before pushing, otherwise the build will not complete.

Let me know if you need any help in the process.

ShehanAT (Contributor, Author)

> Thanks @ShehanAT. Upstream commits should not be in the history of your PR; a new PR is not needed. Could you please synchronize your main branch and then rebase your branch onto main?
>
> $ git rebase main
>
> Note that some relics of conflicts (>>>> HEAD) are still present: they should be removed before pushing, otherwise the build will not complete.
>
> Let me know if you need any help in the process.

Hi @cmarmo. Thanks for the guidance, but after rebasing with main, the upstream commits are still showing up as new commits in this PR for some reason. What do you think is the most convenient way to resolve this?

cmarmo (Member) commented Aug 18, 2022

Hi @ShehanAT, the last commit before the synchronization is a29b92d.
I suggest you reset the branch to this commit.

$ git reset --hard a29b92da1b1501b27ad147997b24dda28ea5b792

Then redo the synchronization

$ git fetch upstream
$ git merge upstream/main 

You will have conflicts in the v1.2.rst file and only there (I have just run through the workflow myself locally). Once the conflicts are fixed:

$ git add doc/whats_new/v1.2.rst
$ git commit -m "Fix conflicts."

At the end of the process, you might want to check if everything went fine with

$ git diff upstream/main --compact-summary

which should output (at the time of me writing this)

 doc/whats_new/v1.2.rst                |  4 ++++
 sklearn/metrics/_ranking.py           | 18 ++++++++++--------
 sklearn/metrics/tests/test_ranking.py |  9 +++++++++
 3 files changed, 23 insertions(+), 8 deletions(-)

Those are the three files you modified, if I am not mistaken.
To push run:

$ git push --force <your_remote> <your_branch>

@ShehanAT ShehanAT force-pushed the label-ranking-average-precision-score-csr-matrix-fix-ShehanAT branch from 499fd06 to b383b9d Compare Aug 18, 2022
ShehanAT (Contributor, Author)

Hi @cmarmo, thanks for the instructions.

My main branch is synchronized and my feature branch is rebased against the main branch now.

Let me know if any further changes are needed.

doc/whats_new/v1.2.rst (outdated, resolved)
cmarmo (Member) commented Aug 19, 2022

All green and up-to-date! Thanks @ShehanAT. Hoping that a second review will arrive soon... 🤞

sklearn/metrics/_ranking.py (outdated, resolved)
Co-authored-by: Christian Lorentzen <[email protected]>
lorentzenchr (Member) left a comment

LGTM. I guess we can rely on type_of_target being tested.

@lorentzenchr lorentzenchr merged commit 272c197 into scikit-learn:main Sep 5, 2022
32 checks passed
glemaitre pushed a commit to glemaitre/scikit-learn that referenced this pull request Sep 12, 2022
Successfully merging this pull request may close these issues.

label_ranking_average_precision_score does not allow csr_matrix for y_true input
4 participants