
Provide full import statement text from auto-import suggestions #31658

Open
compojoom opened this issue May 26, 2019 · 44 comments

@compojoom commented May 26, 2019

Edit from @DanielRosenwasser

Given

import { someThi/**/

or

import { someThi/**/ }

The suggestion is to look for auto-imports and to fully complete the statement with the path

import { someThing } from "./some/path"

  • VSCode Version: 1.35.0-insider
  • OS Version: macOS 10.14.5

Steps to Reproduce:

  1. I'm working on a React project. I have a file that does export const Buttons
  2. In another file I type import { Button... and I expect auto-complete to display Buttons and hitting Enter to complete the path.

But IntelliSense just shows the text Buttons (with abc in front of it) and hitting Enter doesn't add the path to the import.

If I instead use the component directly in a function, then auto-import correctly adds the component to the top of the file with an import {Button} from 'correct_path'

Does this issue occur when all extensions are disabled?: Yes

@mjbvz (Contributor) commented May 28, 2019

Can you please share some example code or a small example project?

Also, when you are typing out the import, is the entire line just import { without any import path yet?

@compojoom (Author) commented May 29, 2019

Here, I made a GIF for you:
(GIF attachment)

I have a something.js - in that file I export the const something

In index.js I try to type import { some, and I was hoping that autocomplete would kick in and complete the line. That doesn't happen.

If I go down in the file and type something, IntelliSense suggests something; I accept it and the import at the top of the file is inserted...

Another funny thing: if I use Flow, whenever I define a type and IntelliSense suggests it, instead of importing at the top, the import happens inline. I guess this is another issue, but I just wanted to mention it in case it is related.

@mjbvz transferred this issue from microsoft/vscode May 29, 2019
@mjbvz changed the title from "suggestion doesn't import path" to "Autoimports while typing import" May 29, 2019
@mjbvz (Contributor) commented May 29, 2019

This is a feature request. The requested behavior is to support auto imports while typing out the import itself, such as:

import { ab|

Could show a completion for an auto import for abc which would complete the line to:

import { abc } './abc'
@mjbvz removed their assignment May 29, 2019
@compojoom (Author) commented May 29, 2019

OK. That's standard with IntelliJ, so I expected it in VSCode as well. Sorry then.

the completion should be

import { abc } from './abc'

Since we are talking about import features:
When I try to import get from 'lodash-es', the generated import is
import {get} from '../../../../lodash-es'
Is there a way for it to be just import {...} from 'node_module'?

@jasonwilliams commented Feb 19, 2020

I agree with @compojoom.
I was surprised that this didn't work by design; it's pretty standard to have import suggestions in other editors. I've linked my issue, as I raised the same thing in VSCode.

@jasonwilliams commented Feb 19, 2020

  • VSCode Version: 1.42.1
  • OS Version: macOS 10.14.6

Steps to Reproduce:

  1. clone https://github.com/jasonwilliams/nextjs9-typescript-server-vscode-example
  2. Try to import test.tsx into page pages/index.tsx

(GIF attachment: no_auto_complete)

TS Server trace

It does work if I just type it out in the document without being inside the import braces.

Looks like the difference is that a completionInfo request is sent to the TypeScript server when I type in the document and I get a response with suggestions, but inside the import I get

[Trace  - 14:23:48.406] <semantic> Response received: completionInfo (1416). Request took 5 ms. Success: false . Message: No content available.

Typing inside import

(screenshot)

Typing inside document

(screenshot)

@RyanCavanaugh does the above help?

@jasonwilliams commented Aug 27, 2020

Can you please share some example code or a small example project?

Also, when you are typing out the import, is the entire line just import { without any import path yet?

@mjbvz comment above

@DanielRosenwasser changed the title from "Autoimports while typing import" to "Provide full import statement text from auto-import suggestions" Sep 14, 2020
@DanielRosenwasser added this to the Backlog milestone Sep 14, 2020
@DanielRosenwasser (Member) commented Sep 14, 2020

Spoke with @andrewbranch about this a little bit. We think this can do a lot to alleviate the pain of ECMAScript's import statement ordering (names before module path) that a lot of users are unhappy with.

Here are some rules for imports that I'm currently thinking of.

  1. If there is an import path that resolves to a module, only provide suggestions to that import path (today's behavior)
  2. If there is an import path that resolves to a file that is not a module, provide no completions (today's behavior)
  3. If there is an import path that doesn't resolve to anything, or there is no import path, then
  • for named imports (import { /**/, import { Yadd/**/, import { Yadd/**/, Stuff } from)
    • complete the identifier
    • insert the missing braces if necessary
    • insert the missing from keyword if necessary
    • fix-up or insert the missing path if necessary
  • for a default import (import Identifi/**/)
    • complete the identifier
    • insert the missing from keyword if necessary
    • fix-up or insert the missing path if necessary
  • for a namespace import (import * as Identif/**/)
    • complete the identifier
    • insert the missing from keyword if necessary
    • fix-up or insert the missing path if necessary

Most of the same logic applies except for named exports, which I think we would not want to power this feature for.

One open question is how this works when you already have existing identifiers in a named import list - do you try to filter the possible module specifiers down? It's not strictly necessary since it's probably an edge case, but it would be a nice UX improvement to try that, and if no modules are found, fall back to module specifiers based on the current identifier.

Curious to hear feedback. We're also open to pull requests, though it's probably not a good first issue.
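
To make rule 3 concrete, here is a minimal sketch of the decision it describes; the function and its text-based heuristics are invented for illustration and are not the actual TypeScript implementation:

```ts
// Hypothetical sketch: given the text of a partially typed import statement,
// decide which parts the auto-import completion would still have to insert.
interface ImportFixups {
  insertBraces: boolean;      // named-import braces missing their closing "}"
  insertFromKeyword: boolean; // no `from` keyword yet
  insertModulePath: boolean;  // no (or unresolvable) module path yet
}

function planImportFixups(line: string): ImportFixups {
  const hasOpenBrace = /\bimport\s*\{/.test(line);
  const hasCloseBrace = line.includes("}");
  const hasFrom = /\bfrom\b/.test(line);
  const hasPath = /["'][^"']*["']/.test(line);
  return {
    insertBraces: hasOpenBrace && !hasCloseBrace,
    insertFromKeyword: !hasFrom,
    insertModulePath: !hasPath,
  };
}

// planImportFixups('import { Yadd')
//   => { insertBraces: true, insertFromKeyword: true, insertModulePath: true }
// planImportFixups('import * as Identif')
//   => { insertBraces: false, insertFromKeyword: true, insertModulePath: true }
```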

@jasonwilliams commented Sep 15, 2020

This is great; I agree with the logic you have come up with for section 3.

One open question is how this works when you already have existing identifiers in a named import list - do you try to filter the possible module specifiers down? It's not strictly necessary since it's probably an edge case, but it would be a nice UX improvement to try that, and if no modules are found, fall back to module specifiers based on the current identifier.

I don't follow; isn't this logic already covered in 1?
If you have existing identifiers in a named import list, then you've already set the path, so the logic can just remain as it is today.
But I feel like I'm missing something here.

@DanielRosenwasser (Member) commented Sep 16, 2020

If you have existing identifiers in a named import list then you've already set the path

Well, potentially not 🙂. That's why the filtering logic is probably unnecessary. Why would you ever be in this situation? But it would be neat if we could leverage existing intent.

@jasonwilliams commented Sep 16, 2020

Why would you ever be in this situation?

The only scenario I can think of is if you start listing modules without providing the path: import {moda, modb, modc...}
Usually that's when you know what you want to import but haven't added the path yet, and you want to add some more modules.

One open question is how this works when you already have existing identifiers in a named import list - do you try to filter the possible module specifiers down?

I'm interested to know how you would do this without a path. How do you know what to filter down to? Do you base this on the modules that are already in the braces? Do you then use these heuristics to both:

  • Know what modules can be added
  • Provide the path to where these modules live
@DanielRosenwasser (Member) commented Sep 16, 2020

Today with auto-imports, we add an entry for every module that exports something with the appropriate name. That gets served up in the completion list.

In that proposal, you could start with that logic, then do a post-pass to look at what other identifiers are in the braces, and ask "does this module export all of these other identifiers as well?" If you end up with more than 0, those are your suggestions; if you end up with none, fall back to the original list for the current identifier.
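
A rough sketch of that post-pass, assuming we already have the candidate modules mapped to the names they export (the shapes and names here are invented for illustration, not the compiler's actual data structures):

```ts
// Hypothetical sketch of the post-pass described above.
// candidates: modules that export something matching the identifier being typed,
// each mapped to the full set of names that module exports.
function filterByExistingImports(
  candidates: Map<string, Set<string>>, // module specifier -> exported names
  existingIdentifiers: string[]         // names already typed inside the braces
): string[] {
  const narrowed = [...candidates.entries()]
    .filter(([, exports]) => existingIdentifiers.every(id => exports.has(id)))
    .map(([specifier]) => specifier);
  // More than 0 matches: suggest only those modules.
  // Otherwise fall back to the original list for the current identifier.
  return narrowed.length > 0 ? narrowed : [...candidates.keys()];
}
```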

@jasonwilliams commented Sep 16, 2020

Today with auto-imports, we add an entry for every module that exports something with the appropriate name. That gets served up in the completion list.

For my understanding: what do these entries look like? Is this a list of filenames or paths for every file (module) that exports something? Or some inverted index of every module and the file it links back to?

If you end up with more than 0, those are your suggestions

When you say more than 0, are you referring to modules (files) that export those identifiers, or the identifiers themselves? I'm assuming you're referring to the identifiers left from the files that match, since you're within the braces, so the path doesn't matter at this point.

Assuming the above is correct, there's also enough information to autocomplete the from "./file/path/to/module" when adding the closing brace.
I agree that if 0 modules match, you can fall back to today's behaviour, just provide the (global completion?) list, and do nothing on the closing brace.

@andrewbranch (Member) commented Nov 20, 2020

Because React has no default export. You can't write import React from "react" in TypeScript without esModuleInterop. That's beside the point though; either the namespace or the default import is a fuzzier match than a named export.

@jasonwilliams commented Nov 20, 2020

import { Rea why would someone type this? Are you saying that if someone accidentally started typing this thinking it's a named import, you would switch it?

As you said, React doesn't exist as an export, so how would you know to swap it to import * as React from "react";? Or would you swap any import that does not exist to be * as Etc?

Looking at import { Rea, I don't see how there's enough information here for you to infer I want import * as React from "react";

@DanielRosenwasser (Member) commented Nov 20, 2020

It's kind of like being proactive about people going down a mental garden path. A similar experience is using speech-to-text and realizing that you've started dictating a half-formed thought that isn't consistent with what you actually wanted to ask.

I don't know if we need to add it right away, but I do actually like the idea.

@andrewbranch (Member) commented Nov 20, 2020

Are you saying that if someone accidentally started typing this thinking it's a named import, you would switch it?

Right, this is the question.

how would you know to swap it to import * as React from "react"

Same way we know to offer it when you type React. Actually, in that case, the exported namespace (from the @types package) is named React, so we can identify that as a possible match. When the default/export assignment is anonymous, we camel case the module specifier and treat that as a possible match.
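
For illustration, "camel case the module specifier" amounts to something along these lines (a made-up approximation, not the compiler's actual helper):

```ts
// Hypothetical approximation: derive a candidate import name from a module
// specifier when the default/export assignment has no name of its own.
function moduleSpecifierToCandidateName(specifier: string): string {
  const base = specifier.split("/").pop() ?? specifier; // last path segment
  return base
    .split(/[^a-zA-Z0-9]+/)      // break on "-", ".", etc.
    .filter(Boolean)
    .map((part, i) => (i === 0 ? part : part[0].toUpperCase() + part.slice(1)))
    .join("");
}

// moduleSpecifierToCandidateName("react-dom") === "reactDom"
// moduleSpecifierToCandidateName("@scope/my-lib") === "myLib"
```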

@andrewbranch (Member) commented Nov 24, 2020

I started prototyping this, and quickly realized that it might take a little bit of work on the editor side to do it properly. There were two possible approaches using the protocol API that exists today:

  1. Use a completion entry’s insertText and replacementSpan to add/replace more of the import statement than the current identifier. I didn’t try this approach because I identified several problems it would have:
    • The editor would not know it’s essentially an auto-import, so would not render the item with the module specifier. This is not too hard to patch on the editor side, but it would be a shame not to take advantage of the existing UI infrastructure.
    • We don’t always resolve the best module specifier at this stage; we wait until the details are requested for an individual item to do that, because we do a lot more work looking at reëxports and stuff. So, doing all that during the completions list request would be bad for perf.
    • The caret would jump to the end of the inserted text, when we really want it to stay at the end of the identifier that triggered the completions.
  2. Treat the completion entry like an auto-import (give it a source property), and then resolve different code actions on the details response to add/replace more of the import statement. This should solve all the problems with approach (1). However, it seems that the code actions around the completion conflict with the actual completion of the identifier. I was having trouble building VS Code locally so I couldn’t tell exactly what was happening, but neither calculating positions relative to the pre-completed identifier nor calculating them relative to the post-completed identifier worked consistently. It would probably be easiest if the completion entry details could overwrite the original entry’s insertText and replacementSpan, or specify that the code actions should be executed without actually completing the identifier, and/or indicate where the caret should end up after the completion is done. I’m not sure what would be best on the editor side. @mjbvz @jessetrinity @uniqueiniquity thoughts/ideas? Let me know if I need to explain further; I know this is probably hard to grok in text.
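
For reference, approach (1) would have amounted to returning something roughly like this from the completions request, using the insertText and replacementSpan fields mentioned above (the values are invented for the example):

```ts
// Illustrative only: a completion entry that replaces the whole partially
// typed statement instead of just the identifier (approach 1).
const typedSoFar = 'import { ca'; // what the user has typed so far
const entry = {
  name: "camelCase",
  insertText: 'import { camelCase } from "lodash"',
  replacementSpan: { start: 0, length: typedSoFar.length }, // replace the whole line
};
```
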
@andrewbranch (Member) commented Nov 25, 2020

I figured out exactly what VS Code is doing. Using the example where you want to go from import ca to import { camelCase } from "lodash", the process goes like this:

  1. User types import , no completions yet
  2. User types c. Completions are triggered, but camelCase isn’t the top one yet, so details are not resolved.
  3. User types a. That filtering makes camelCase the top hit, so details are resolved. During this process we build up the code actions to be applied. The first edit is to insert {  at position 7 (at ca). The second is to insert } from "lodash" at position 9 (at the end of ca).
  4. VS Code applies the code actions from the completion item. So it splices in the first edit (+ shows the position of insertions and ^ shows the position of the caret):
    import { ca
           +   ^
    
    then the second:
    import { ca } from "lodash"
               +               ^
    
    Already we have a problem: the caret moved to the end of the statement. Since our insertion was at the position of the caret, it was ambiguous whether the edit should have gone before or after it.
  5. VS Code then applies the actual identifier completion edit. What it intends to do is replace the ca that was already typed with camelCase. But what actually happens is it replaces the two characters behind the caret with camelCase:
    import { ca } from "lodascamelCase

It would solve the problem if we could make VS Code apply the second edit after the caret position instead of moving the caret to the right with the edit. However, there’s another problem: VS Code may not re-request details with every keystroke, so we can also end up in a position where we calculate the edits to be in the exact right place for the partially-typed identifier ca, but if the user quickly types mel and then hits tab before VS Code requests new details, the edits we calculated are now out of date, and will try to insert at an incorrect position.

So, I don’t think this feature is possible without a new editor API to support it. And if we add something to the completions API, I’m not sure what implications that has for the LSP side of things on VS.
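
To make step 3 above concrete, the two pre-identifier edits could be written roughly as text changes against the original line import ca (the exact text and spans of the real edits may differ slightly):

```ts
// Illustrative only: the code-action edits described in step 3, expressed as
// { span, newText } changes against the 9-character string "import ca".
const codeActionEdits = [
  { span: { start: 7, length: 0 }, newText: "{ " },               // before "ca"
  { span: { start: 9, length: 0 }, newText: ' } from "lodash"' }, // after "ca"
];
// Applied in order, these turn `import ca` into `import { ca } from "lodash"`;
// the identifier completion is then supposed to replace `ca` with `camelCase`,
// which is exactly the step that goes wrong once the caret has moved.
```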

@jasonwilliams commented Nov 25, 2020

So, I don’t think this feature is possible without a new editor API to support it. And if we add something to the completions API, I’m not sure what implications that has for the LSP side of things on VS.

@andrewbranch it sounds like this should be broken off as a separate feature (but still something that should be pursued). From what I understand, #31658 (comment) is a nice-to-have. Is #31658 (comment) still possible to do without VSCode changes? (I would assume it is, as we already do this sort of IntelliSense/editing in other parts of the file.)

@andrewbranch (Member) commented Nov 26, 2020

No, every possible version of this whole feature would be affected by the problems I've outlined. It all needs editor work to be done well.

@andrewbranch (Member) commented Nov 30, 2020

Well, I should clarify: we could make it work only for named imports, and only for people who have brace completion turned on. We can go from import { ca } to import { camelCase } from "lodash", but we cannot go from import { ca to import { camelCase } from "lodash". The important factor is whether or not the from "lodash" insertion goes after the caret or right on top of it. And that doesn’t seem like a good enough solution to settle for.

@jasonwilliams commented Nov 30, 2020

Well, I should clarify: we could make it work only for named imports, and only for people who have brace completion turned on.

I'm sure it's on by default; I've never switched it on, so I'm assuming almost everyone has this on. (I can only speak for VSCode.)

We can go from import { ca } to import { camelCase } from "lodash", but we cannot go from import { ca to import { camelCase } from "lodash"

When I type import { I get the closing } brace automatically, so then I start typing the names to import; this alone would be a big improvement over nothing at all. I'm a sample set of one, but for me it seems worth doing just for that.

The important factor is whether or not the from "lodash" insertion goes after the caret or right on top of it

Are you referring to the cursor? If so, I would expect it to be at the end of the line after the insert, so the latter.

And that doesn’t seem like a good enough solution to settle for.

Yeah, I agree that what was shown in the thread is better. But if that's not possible, offering autocomplete between the braces is still a big upgrade compared to what exists today, and it would be a shame to throw that out.

@andrewbranch (Member) commented Dec 2, 2020

@uniqueiniquity pointed me toward the upcoming (3.16) LSP spec that can handle this nicely, through the combination of two mechanisms:

  1. The resolveSupport client capability means that we could lazily resolve (i.e., get during the details request) not just code actions, but any CompletionItem properties, like textEdit (which is analogous to TS Server’s insertText and replacementSpan). This means we could take the more straightforward route of specifying a single edit that replaces the default behavior of the identifier completion, without worrying about the performance concerns I mentioned earlier of calculating the exact edit up front.
  2. It appears that the snippetSupport client capability, combined with the insertTextFormat CompletionItem property, would allow us to control where the text insertion cursor lands.

Together, these two features solve all of the problems I mentioned earlier. Implementing the former in typescript-language-features for VS Code should be trivial, and it should either Just Work™ or also be trivial for VS, since VS is already speaking LSP to its TypeScript support layer.

Unfortunately, I couldn’t find anything resembling the latter in vscode.CompletionItem, and I believe @uniqueiniquity told me he doesn’t think it exists yet in VS either. That part might take some investment from editor teams, as it would reach outside the scope of my knowledge for either editor. But since it’s a first-class LSP API, I feel fairly confident this is a good way forward for an implementation, and controlling the text insertion cursor position may not be a blocker for the feature, since it could be added later when clients support it without redesigning the implementation.
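
For reference, the two client capabilities in question have roughly this shape in LSP 3.16 (illustrative values; whether and when a given editor actually advertises them is exactly the open question here):

```ts
// Shaped after LSP 3.16 CompletionClientCapabilities; values are illustrative.
const completionClientCapabilities = {
  completionItem: {
    // (2) lets the server return insertTextFormat: Snippet items, so the final
    // caret position can be controlled with a snippet tab stop such as $0
    snippetSupport: true,
    // (1) lists the CompletionItem properties the client is willing to have
    // filled in lazily during completionItem/resolve
    resolveSupport: {
      properties: ["detail", "documentation", "additionalTextEdits", "textEdit"],
    },
  },
};
```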

@NTaylorMullen (Member) commented Dec 3, 2020

  1. The resolveSupport client capability means that we could lazily resolve (i.e., get during the details request) not just code actions, but any CompletionItem properties, like textEdit (which is analogous to TS Server’s insertText and replacementSpan). This means we could take the more straightforward route of specifying a single edit that replaces the default behavior of the identifier completion, without worrying about the performance concerns I mentioned earlier of calculating the exact edit up front.

Note that this is not completely true. Resolve lets you lazily evaluate a lot; however, most platforms will not allow you to lazily evaluate the text edit, because it impacts how a completion item is evaluated at display time. For instance, the spec acknowledges this:

All other properties (usually sortText, filterText, insertText and textEdit) must be provided in the textDocument/completion response and must not be changed during resolve.
@uniqueiniquity (Member) commented Dec 4, 2020

@NTaylorMullen I thought that was the point of the new resolveSupport client capability, though?
That is, the platform could specify that it does allow it.

@uniqueiniquity (Member) commented Dec 4, 2020

Notably, VS does allow for this, and should be specifying as such in the client capabilities.

@andrewbranch (Member) commented Dec 4, 2020

The full text of the discussion is:

By default the request can only delay the computation of the detail and documentation properties. Since 3.16.0 the client can signal that it can resolve more properties lazily. This is done using the completionItem#resolveSupport client capability which lists all properties that can be filled in during a completionItem/resolve request. All other properties (usually sortText, filterText, insertText and textEdit) must be provided in the textDocument/completion response and must not be changed during resolve.

I read “all other properties” to mean “all properties not specified by resolveSupport.” Without any other indication from the spec, I would expect the only property that could not be resolved lazily is label, since that is the only required property in CompletionItem. Is that not the case? Why would the textEdit be needed up front to display the item in the completion list?

@NTaylorMullen (Member) commented Dec 4, 2020

@NTaylorMullen I thought that was the point of the new resolveSupport client capability, though?
That is, the platform could specify that it does allow it.
Notably, VS does allow for this, and should be specifying as such in the client capabilities.

I wasn't sure if VS did, but I know that VSCode didn't, not that long ago 😄; @333fred went down that path and ran into headaches.

I read “all other properties” to mean “all properties not specified by resolveSupport.” Without any other indication from the spec, I would expect the only property that could not be resolved lazily is label, since that is the only required property in CompletionItem. Is that not the case? Why would the textEdit be needed up front to display the item in the completion list?

I should have been more specific. Yes, resolveSupport can specify that the edit can be resolved lazily; however, in VSCode it doesn't, which I presume is a blocker?

Why would the textEdit be needed up front to display the item in the completion list?

From my understanding it influences when to show/dismiss the completion list, based on the potential "replaceable" completion span. I don't 100% recall, though, to be honest.

@andrewbranch (Member) commented Dec 4, 2020

Looking at the typescript-language-features source code for VS Code, I don’t see why it would be a problem to set vscode.CompletionItem’s insertText and replacementRange (which together are analogous to LSP’s textEdit) during the details resolution, but I haven’t tried it yet. At this stage I’m happy to move forward as long as the functionality is theoretically supported by the relevant specs, as I think I’m capable of putting the missing pieces into VS Code myself. Maybe I’ll find out why it’s a problem soon 😄

@NTaylorMullen (Member) commented Dec 4, 2020

I don’t see why it would be a problem to set vscode.CompletionItem’s insertText and replacementRange (which together are analogous to LSP’s textEdit) during the details resolution, but I haven’t tried it yet.

I'd also imagine that in VSCode, insertText would also be required to be in its "final" state on the initial completion request, unfortunately. That being said, if the TypeScript code is factored in a way where updating one platform doesn't require updating another, then you're good! 😄

@333fred (Member) commented Dec 4, 2020

I wasn't sure if VS did, but I know that VSCode didn't, not that long ago 😄; @333fred went down that path and ran into headaches.

Indeed, vscode only supports resolving detail, documentation, and additionalTextEdits. https://github.com/microsoft/vscode-languageserver-node/blob/f1bc8dba5b8192ce8730aaeb05ce823cfff8c9b5/client/src/common/client.ts#L1514

@andrewbranch (Member) commented Dec 4, 2020

This might be where the confusion lies—TypeScript does not speak LSP to VS Code. As far as I can tell, I could make any imperative change in here in conjunction with a TS Server protocol change.

@NTaylorMullen (Member) commented Dec 4, 2020

This might be where the confusion lies—TypeScript does not speak LSP to VS Code. As far as I can tell, I could make any imperative change in here in conjunction with a TS Server protocol change.

Don't you wire your "custom" LSP messages up via VSCode's hooks, though? I.e., their completion resolve hook? If that's the case, it'd have the same restrictions.

@andrewbranch (Member) commented Dec 9, 2020

Only a couple of possible ways forward here. It seems like we have to calculate the full edit for all completions we offer up front, which adds a little bit of overhead¹ for module specifier resolution on every item. Options are:

  1. Don’t worry about the overhead, since it likely won’t be noticeable for users with a smaller number of dependencies, and anything is an improvement over what we do now, which is offer no completions.
  2. Use a stricter filter than we currently do to reduce the number of completions returned. For normal auto-import completions, we don’t process any symbols whose names do not contain the token text characters in order, case-insensitive. We could use a less fuzzy match, or simply put a hard limit on how many auto-importable symbols we’ll offer. We could also wait to return any completions until more than one character is available to filter by or completions are explicitly requested, though this might violate expectations (any time completions don’t show up when you expect them to feels jarring).
  3. Abandon this issue, and revive it if clients ever support lazily resolving the main edit in the future.

¹ Or currently, a lot of bit for pnpm users, a la #40584
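
A minimal sketch of the in-order, case-insensitive character filter described in option 2 (an illustration of the idea, not the compiler's actual matcher):

```ts
// Reject candidates whose names do not contain the typed characters
// in order, case-insensitively (the filter described in option 2).
function containsCharactersInOrder(typed: string, candidate: string): boolean {
  const lower = candidate.toLowerCase();
  let searchFrom = 0;
  for (const ch of typed.toLowerCase()) {
    const index = lower.indexOf(ch, searchFrom);
    if (index === -1) return false; // character not found after the previous match
    searchFrom = index + 1;
  }
  return true;
}

// containsCharactersInOrder("ca", "camelCase") === true
// containsCharactersInOrder("ca", "debounce")  === false
```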

@DanielRosenwasser (Member) commented Dec 9, 2020

Can we be concrete about the specific overhead of each item? From the top-level scope, we already provide auto-import completions, so what sort of work has to be done per module to generate the full statement?

@DanielRosenwasser (Member) commented Dec 9, 2020

Use a stricter filter than we currently do to reduce the number of completions returned. For normal auto-import completions, we don’t process any symbols whose names do not contain the token text characters in order, case-insensitive.

I think this might be reasonable here too.

@andrewbranch (Member) commented Dec 9, 2020

Can we be concrete about the specific overhead of each item?

When auto import completions are not cached, the details request for a single item is usually an order of magnitude faster than the completion list request. Obviously there are huge variables that could affect this relationship, but that’s typical based on old notes of mine. Some portion of the details request is spent redoing work that was done in the completion list request, and the rest is spent on module specifier resolution and building display parts and code actions—so if we tried to do all that work in the completion list request, we’d be able to skip the former part. But let’s still call the details work an order of magnitude faster than the full list work—200ms for the list and 20ms for the details might be a ballpark guess for a small program. This order of magnitude relationship means the total response time for the list request could easily double if we tried to do the details work for 10 auto imports.

My gut feeling is that analysis is coming out overly pessimistic for typical cases, but 10 auto imports is not very many. One of the things that can make module specifier resolution more complicated is symlinks, which play a big role in lerna, yarn workspaces, and pnpm. I don’t want to write a feature that appears miserably slow for power users with big multi-project workspaces, even if it is purely adding functionality where none exists now. I’m open to a smarter filter only if the experience feels really seamless. But part of my hesitation is that this is a non-trivial amount of work that would largely not be reusable if the limitation of not being able to lazily resolve the main edit were removed. Conversely, because normal auto-imports are lazily resolved now, this would be a pretty small feature that could reuse a lot of existing code if that were an option for these kinds of completions. So trying to resolve everything eagerly feels risky, because it’s a decent amount of work and there’s likely to be a performance/experience tradeoff.

9 participants